# AlphaFold v2.3.0

This technical note describes the updates to the code and model weights that
were made to produce AlphaFold v2.3.0, including updated training data.

We have fine-tuned new AlphaFold-Multimer weights using an identical model
architecture but a new training cutoff of 2021-09-30. Previously released
versions of AlphaFold and AlphaFold-Multimer were trained using PDB structures
with a release date before 2018-04-30, a cutoff chosen to coincide with the
start of the 2018 CASP13 assessment. The new training cutoff provides ~30% more
data to train AlphaFold and, more importantly, much more data on large protein
complexes: 4× the number of electron microscopy structures and, in aggregate,
twice the number of large structures (more than 2,000 residues)[^1]. Due to the
significant increase in the number of large structures, we were also able to
increase the size of training crops (the subsets of each structure used to
train AlphaFold) from 384 to 640 residues. These new AlphaFold-Multimer models
are therefore expected to be substantially more accurate on large protein
complexes, even though we use the same model architecture and training
methodology as described in our previously released AlphaFold-Multimer paper.

These models were initially developed in response to a request from the CASP
organizers to better understand baselines for the progress of structure
prediction in CASP15. Because of the significant increase in accuracy for large
targets, we are making them available as the default multimer models. Since
they were developed as baselines, we emphasized minimal changes to our previous
AlphaFold-Multimer system while accommodating larger complexes. In particular,
we increased the number of chains used at training time from 8 to 20 and
increased the maximum number of MSA sequences from 1,152 to 2,048 for 3 of the
5 AlphaFold-Multimer models.

For the CASP15 baseline, we also used somewhat more expensive inference
settings that have been found externally to improve AlphaFold accuracy: we
increased the number of seeds per model to 20[^2] and raised the maximum number
of recycling iterations to 20, with early stopping[^3]. Increasing the number
of seeds to 20 is recommended for very large or difficult targets but is not
the default due to the increased computational time.
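
For reference, these settings map onto existing configuration fields. Below is
a minimal sketch, assuming the v2.3 multimer model names and the config fields
used by this release's Colab notebook; the per-model seed count corresponds to
the `--num_multimer_predictions_per_model` flag of `run_alphafold.py`:

```python
from alphafold.model import config

# Sketch only: apply the CASP15-style inference settings to one of the five
# v2.3 multimer models (field names follow the Colab notebook in this release).
cfg = config.model_config('model_1_multimer_v3')
cfg.model.num_recycle = 20                    # allow up to 20 recycles...
cfg.model.recycle_early_stop_tolerance = 0.5  # ...stopping once outputs converge

# 20 seeds per model; with run_alphafold.py this is
# --num_multimer_predictions_per_model=20.
NUM_SEEDS_PER_MODEL = 20
```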

Overall, we expect these new models to be preferred whenever the stoichiometry
of the complex is known, including for known monomeric structures. Where the
stoichiometry is unknown, such as in genome-scale prediction, single-chain
AlphaFold is likely to be more accurate on average unless the chain has several
thousand residues.

The predicted structures used for the CASP15 baselines are available
[here](https://github.com/deepmind/alphafold/blob/main/docs/casp15_predictions.zip).

[^1]: wwPDB Consortium. "Protein Data Bank: the single global archive for 3D
macromolecular structure data." Nucleic Acids Research 47 (2019): D520–D528.
[^2]: Johansson-Åkhe, Isak, and Björn Wallner. "Improving peptide-protein
docking with AlphaFold-Multimer using forced sampling." Frontiers in
Bioinformatics 2 (2022): 959160.
[^3]: Gao, Mu, et al. "AF2Complex predicts direct physical interactions in
multimeric proteins with deep learning." Nature Communications 13 (2022): 1–13.
>T1029 EbsA, Cyanobacteria, 125 residues
MRIDELVPADPRAVSLYTPYYSQANRRRYLPYALSLYQGSSIEGSRAVEGGAPISFVATWTVTPLPADMTRCHLQFNNDAELTYEILLPNHEFLEYLIDMLMGYQRMQKTDFPGAFYRRLLGYDS
>H1045 PEX4-PEX22, Arabidopsis thaliana, subunit 1, 157 residues
MQASRARLFKEYKEVQREKVADPDIQLICDDTNIFKWTALIKGPSETPYEGGVFQLAFSVPEPYPLQPPQVRFLTKIFHPNVHFKTGEICLDILKNAWSPAWTLQSVCRAIIALMAHPEPDSPLNCDSGNLLRSGDVRGFNSMAQMYTRLAAMPKKG
>H1045 PEX4-PEX22, Arabidopsis thaliana, subunit 2, 173 residues
AVQDVVDQFFQPVKPTLGQIVRQKLSEGRKVTCRLLGVILEETSPEELQKQATVRSSVLEVLLEITKYSDLYLMERVLDDESEAKVLQALENAGVFTSGGLVKDKVLFCSTEIGRTSFVRQLEPDWHIDTNPEISTQLARFIKYQLHVATVKPERTAPNVFTSQSIEQFFGSV
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "pc5-mbsX9PZC"
},
"source": [
"# AlphaFold Colab\n",
"\n",
"This Colab notebook allows you to easily predict the structure of a protein using a slightly simplified version of [AlphaFold v2.3.2](https://doi.org/10.1038/s41586-021-03819-2). \n",
"\n",
"**Differences to AlphaFold v2.3.2**\n",
"\n",
"In comparison to AlphaFold v2.3.2, this Colab notebook uses **no templates (homologous structures)** and a selected portion of the [BFD database](https://bfd.mmseqs.com/). We have validated these changes on several thousand recent PDB structures. While accuracy will be near-identical to the full AlphaFold system on many targets, a small fraction have a large drop in accuracy due to the smaller MSA and lack of templates. For best reliability, we recommend instead using the [full open source AlphaFold](https://github.com/deepmind/alphafold/), or the [AlphaFold Protein Structure Database](https://alphafold.ebi.ac.uk/).\n",
"\n",
"**This Colab has a small drop in average accuracy for multimers compared to local AlphaFold installation, for full multimer accuracy it is highly recommended to run [AlphaFold locally](https://github.com/deepmind/alphafold#running-alphafold).** Moreover, the AlphaFold-Multimer requires searching for MSA for every unique sequence in the complex, hence it is substantially slower. If your notebook times-out due to slow multimer MSA search, we recommend either using Colab Pro or running AlphaFold locally.\n",
"\n",
"Please note that this Colab notebook is provided for theoretical modelling only and caution should be exercised in its use. \n",
"\n",
"The **PAE file format** has been updated to match AFDB. Please see the [AFDB FAQ](https://alphafold.ebi.ac.uk/faq/#faq-7) for a description of the new format.\n",
"\n",
"**Citing this work**\n",
"\n",
"Any publication that discloses findings arising from using this notebook should [cite](https://github.com/deepmind/alphafold/#citing-this-work) the [AlphaFold paper](https://doi.org/10.1038/s41586-021-03819-2).\n",
"\n",
"**Licenses**\n",
"\n",
"This Colab uses the [AlphaFold model parameters](https://github.com/deepmind/alphafold/#model-parameters-license) which are subject to the Creative Commons Attribution 4.0 International ([CC BY 4.0](https://creativecommons.org/licenses/by/4.0/legalcode)) license. The Colab itself is provided under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0). See the full license statement below.\n",
"\n",
"**More information**\n",
"\n",
"You can find more information about how AlphaFold works in the following papers:\n",
"\n",
"* [AlphaFold methods paper](https://www.nature.com/articles/s41586-021-03819-2)\n",
"* [AlphaFold predictions of the human proteome paper](https://www.nature.com/articles/s41586-021-03828-1)\n",
"* [AlphaFold-Multimer paper](https://www.biorxiv.org/content/10.1101/2021.10.04.463034v1)\n",
"\n",
"FAQ on how to interpret AlphaFold predictions are [here](https://alphafold.ebi.ac.uk/faq).\n",
"\n",
"If you have any questions not covered in the FAQ, please contact the AlphaFold team at alphafold@deepmind.com.\n",
"\n",
"**Get in touch**\n",
"\n",
"We would love to hear your feedback and understand how AlphaFold has been useful in your research. Share your stories with us at alphafold@deepmind.com.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uC1dKAwk2eyl"
},
"source": [
"## Setup\n",
"\n",
"Start by running the 2 cells below to set up AlphaFold and all required software."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "woIxeCPygt7K"
},
"outputs": [],
"source": [
"# Set environment variables before running any other code.\n",
"import os\n",
"os.environ['TF_FORCE_UNIFIED_MEMORY'] = '1'\n",
"os.environ['XLA_PYTHON_CLIENT_MEM_FRACTION'] = '4.0'\n",
"\n",
"#@title 1. Install third-party software\n",
"\n",
"#@markdown Please execute this cell by pressing the _Play_ button\n",
"#@markdown on the left to download and import third-party software\n",
"#@markdown in this Colab notebook. (See the [acknowledgements](https://github.com/deepmind/alphafold/#acknowledgements) in our readme.)\n",
"\n",
"#@markdown **Note**: This installs the software on the Colab\n",
"#@markdown notebook in the cloud and not on your computer.\n",
"\n",
"from IPython.utils import io\n",
"import os\n",
"import subprocess\n",
"import tqdm.notebook\n",
"\n",
"TQDM_BAR_FORMAT = '{l_bar}{bar}| {n_fmt}/{total_fmt} [elapsed: {elapsed} remaining: {remaining}]'\n",
"\n",
"try:\n",
" with tqdm.notebook.tqdm(total=100, bar_format=TQDM_BAR_FORMAT) as pbar:\n",
" with io.capture_output() as captured:\n",
" # Uninstall default Colab version of TF.\n",
" %shell pip uninstall -y tensorflow keras\n",
"\n",
" %shell sudo apt install --quiet --yes hmmer\n",
" pbar.update(6)\n",
"\n",
" # Install py3dmol.\n",
" %shell pip install py3dmol\n",
" pbar.update(2)\n",
"\n",
" # Install OpenMM and pdbfixer.\n",
" %shell rm -rf /opt/conda\n",
" %shell wget -q -P /tmp \\\n",
" https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \\\n",
" \u0026\u0026 bash /tmp/Miniconda3-latest-Linux-x86_64.sh -b -p /opt/conda \\\n",
" \u0026\u0026 rm /tmp/Miniconda3-latest-Linux-x86_64.sh\n",
" pbar.update(9)\n",
"\n",
" PATH=%env PATH\n",
" %env PATH=/opt/conda/bin:{PATH}\n",
" %shell conda install -qy conda==24.1.2 \\\n",
" \u0026\u0026 conda install -qy -c conda-forge \\\n",
" python=3.10 \\\n",
" openmm=8.0.0 \\\n",
" pdbfixer\n",
" pbar.update(80)\n",
"\n",
" # Create a ramdisk to store a database chunk to make Jackhmmer run fast.\n",
" %shell sudo mkdir -m 777 --parents /tmp/ramdisk\n",
" %shell sudo mount -t tmpfs -o size=9G ramdisk /tmp/ramdisk\n",
" pbar.update(2)\n",
"\n",
" %shell wget -q -P /content \\\n",
" https://git.scicore.unibas.ch/schwede/openstructure/-/raw/7102c63615b64735c4941278d92b554ec94415f8/modules/mol/alg/src/stereo_chemical_props.txt\n",
" pbar.update(1)\n",
"except subprocess.CalledProcessError:\n",
" print(captured)\n",
" raise\n",
"\n",
"executed_cells = set([1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "VzJ5iMjTtoZw"
},
"outputs": [],
"source": [
"#@title 2. Download AlphaFold\n",
"\n",
"#@markdown Please execute this cell by pressing the *Play* button on\n",
"#@markdown the left.\n",
"\n",
"GIT_REPO = 'https://github.com/deepmind/alphafold'\n",
"SOURCE_URL = 'https://storage.googleapis.com/alphafold/alphafold_params_colab_2022-12-06.tar'\n",
"PARAMS_DIR = './alphafold/data/params'\n",
"PARAMS_PATH = os.path.join(PARAMS_DIR, os.path.basename(SOURCE_URL))\n",
"\n",
"try:\n",
" with tqdm.notebook.tqdm(total=100, bar_format=TQDM_BAR_FORMAT) as pbar:\n",
" with io.capture_output() as captured:\n",
" %shell rm -rf alphafold\n",
" %shell git clone --branch main {GIT_REPO} alphafold\n",
" pbar.update(8)\n",
" # Install the required versions of all dependencies.\n",
" %shell pip3 install -r ./alphafold/requirements.txt\n",
" # Run setup.py to install only AlphaFold.\n",
" %shell pip3 install --no-dependencies ./alphafold\n",
" %shell pip3 install pyopenssl==22.0.0\n",
" pbar.update(10)\n",
"\n",
" # Make sure stereo_chemical_props.txt is in all locations where it could be searched for.\n",
" %shell mkdir -p /content/alphafold/alphafold/common\n",
" %shell cp -f /content/stereo_chemical_props.txt /content/alphafold/alphafold/common\n",
" %shell mkdir -p /opt/conda/lib/python3.10/site-packages/alphafold/common/\n",
" %shell cp -f /content/stereo_chemical_props.txt /opt/conda/lib/python3.10/site-packages/alphafold/common/\n",
"\n",
" # Load parameters\n",
" %shell mkdir --parents \"{PARAMS_DIR}\"\n",
" %shell wget -O \"{PARAMS_PATH}\" \"{SOURCE_URL}\"\n",
" pbar.update(27)\n",
"\n",
" %shell tar --extract --verbose --file=\"{PARAMS_PATH}\" \\\n",
" --directory=\"{PARAMS_DIR}\" --preserve-permissions\n",
" %shell rm \"{PARAMS_PATH}\"\n",
" pbar.update(55)\n",
"except subprocess.CalledProcessError:\n",
" print(captured)\n",
" raise\n",
"\n",
"import jax\n",
"if jax.local_devices()[0].platform == 'tpu':\n",
" raise RuntimeError('Colab TPU runtime not supported. Change it to GPU via Runtime -\u003e Change Runtime Type -\u003e Hardware accelerator -\u003e GPU.')\n",
"elif jax.local_devices()[0].platform == 'cpu':\n",
" raise RuntimeError('Colab CPU runtime not supported. Change it to GPU via Runtime -\u003e Change Runtime Type -\u003e Hardware accelerator -\u003e GPU.')\n",
"else:\n",
" print(f'Running with {jax.local_devices()[0].device_kind} GPU')\n",
"\n",
"# Make sure everything we need is on the path.\n",
"import sys\n",
"sys.path.append('/opt/conda/lib/python3.10/site-packages')\n",
"sys.path.append('/content/alphafold')\n",
"\n",
"executed_cells.add(2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "W4JpOs6oA-QS"
},
"source": [
"## Making a prediction\n",
"\n",
"Please paste the sequence of your protein in the text box below, then run the remaining cells via _Runtime_ \u003e _Run after_. You can also run the cells individually by pressing the _Play_ button on the left.\n",
"\n",
"Note that the search against databases and the actual prediction can take some time, from minutes to hours, depending on the length of the protein and what type of GPU you are allocated by Colab (see FAQ below)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "rowN0bVYLe9n"
},
"outputs": [],
"source": [
"#@title 3. Enter the amino acid sequence(s) to fold ⬇️\n",
"#@markdown Enter the amino acid sequence(s) to fold:\n",
"#@markdown * If you enter only a single sequence, the monomer model will be \n",
"#@markdown used (unless you override this below).\n",
"#@markdown * If you enter multiple sequences, the multimer model will be used.\n",
"\n",
"from alphafold.notebooks import notebook_utils\n",
"# Track cell execution to ensure correct order.\n",
"notebook_utils.check_cell_execution_order(executed_cells, 3)\n",
"\n",
"import enum\n",
"\n",
"@enum.unique\n",
"class ModelType(enum.Enum):\n",
" MONOMER = 0\n",
" MULTIMER = 1\n",
"\n",
"sequence_1 = 'MAAHKGAEHHHKAAEHHEQAAKHHHAAAEHHEKGEHEQAAHHADTAYAHHKHAEEHAAQAAKHDAEHHAPKPH' #@param {type:\"string\"}\n",
"sequence_2 = '' #@param {type:\"string\"}\n",
"sequence_3 = '' #@param {type:\"string\"}\n",
"sequence_4 = '' #@param {type:\"string\"}\n",
"sequence_5 = '' #@param {type:\"string\"}\n",
"sequence_6 = '' #@param {type:\"string\"}\n",
"sequence_7 = '' #@param {type:\"string\"}\n",
"sequence_8 = '' #@param {type:\"string\"}\n",
"sequence_9 = '' #@param {type:\"string\"}\n",
"sequence_10 = '' #@param {type:\"string\"}\n",
"sequence_11 = '' #@param {type:\"string\"}\n",
"sequence_12 = '' #@param {type:\"string\"}\n",
"sequence_13 = '' #@param {type:\"string\"}\n",
"sequence_14 = '' #@param {type:\"string\"}\n",
"sequence_15 = '' #@param {type:\"string\"}\n",
"sequence_16 = '' #@param {type:\"string\"}\n",
"sequence_17 = '' #@param {type:\"string\"}\n",
"sequence_18 = '' #@param {type:\"string\"}\n",
"sequence_19 = '' #@param {type:\"string\"}\n",
"sequence_20 = '' #@param {type:\"string\"}\n",
"\n",
"input_sequences = (\n",
" sequence_1, sequence_2, sequence_3, sequence_4, sequence_5, \n",
" sequence_6, sequence_7, sequence_8, sequence_9, sequence_10,\n",
" sequence_11, sequence_12, sequence_13, sequence_14, sequence_15, \n",
" sequence_16, sequence_17, sequence_18, sequence_19, sequence_20)\n",
"\n",
"MIN_PER_SEQUENCE_LENGTH = 16\n",
"MAX_PER_SEQUENCE_LENGTH = 4000\n",
"MAX_MONOMER_MODEL_LENGTH = 2500\n",
"MAX_LENGTH = 4000\n",
"MAX_VALIDATED_LENGTH = 3000\n",
"\n",
"#@markdown Select this checkbox to run the multimer model for a single sequence.\n",
"#@markdown For proteins that are monomeric in their native form, or for very \n",
"#@markdown large single chains you may get better accuracy and memory efficiency\n",
"#@markdown by using the multimer model.\n",
"#@markdown \n",
"#@markdown \n",
"#@markdown Due to improved memory efficiency the multimer model has a maximum\n",
"#@markdown limit of 4000 residues, while the monomer model has a limit of 2500\n",
"#@markdown residues.\n",
"\n",
"use_multimer_model_for_monomers = False #@param {type:\"boolean\"}\n",
"\n",
"# Validate the input sequences.\n",
"sequences = notebook_utils.clean_and_validate_input_sequences(\n",
" input_sequences=input_sequences,\n",
" min_sequence_length=MIN_PER_SEQUENCE_LENGTH,\n",
" max_sequence_length=MAX_PER_SEQUENCE_LENGTH)\n",
"\n",
"if len(sequences) == 1:\n",
" if use_multimer_model_for_monomers:\n",
" print('Using the multimer model for single-chain, as requested.')\n",
" model_type_to_use = ModelType.MULTIMER\n",
" else:\n",
" print('Using the single-chain model.')\n",
" model_type_to_use = ModelType.MONOMER\n",
"else:\n",
" print(f'Using the multimer model with {len(sequences)} sequences.')\n",
" model_type_to_use = ModelType.MULTIMER\n",
"\n",
"# Check whether total length exceeds limit.\n",
"total_sequence_length = sum([len(seq) for seq in sequences])\n",
"if total_sequence_length \u003e MAX_LENGTH:\n",
" raise ValueError('The total sequence length is too long: '\n",
" f'{total_sequence_length}, while the maximum is '\n",
" f'{MAX_LENGTH}.')\n",
"\n",
"# Check whether we exceed the monomer limit.\n",
"if model_type_to_use == ModelType.MONOMER:\n",
" if len(sequences[0]) \u003e MAX_MONOMER_MODEL_LENGTH:\n",
" raise ValueError(\n",
" f'Input sequence is too long: {len(sequences[0])} amino acids, while '\n",
" f'the maximum for the monomer model is {MAX_MONOMER_MODEL_LENGTH}. You may '\n",
" 'be able to run this sequence with the multimer model by selecting the '\n",
" 'use_multimer_model_for_monomers checkbox above.')\n",
" \n",
"if total_sequence_length \u003e MAX_VALIDATED_LENGTH:\n",
" print('WARNING: The accuracy of the system has not been fully validated '\n",
" 'above 3000 residues, and you may experience long running times or '\n",
" f'run out of memory. Total sequence length is {total_sequence_length} '\n",
" 'residues.')\n",
"\n",
"executed_cells.add(3)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "2tTeTTsLKPjB"
},
"outputs": [],
"source": [
"#@title 4. Search against genetic databases\n",
"\n",
"#@markdown Once this cell has been executed, you will see\n",
"#@markdown statistics about the multiple sequence alignment\n",
"#@markdown (MSA) that will be used by AlphaFold. In particular,\n",
"#@markdown you’ll see how well each residue is covered by similar\n",
"#@markdown sequences in the MSA.\n",
"\n",
"# Track cell execution to ensure correct order\n",
"notebook_utils.check_cell_execution_order(executed_cells, 4)\n",
"\n",
"# --- Python imports ---\n",
"import collections\n",
"import copy\n",
"from concurrent import futures\n",
"import json\n",
"import random\n",
"import shutil\n",
"\n",
"from urllib import request\n",
"from google.colab import files\n",
"from matplotlib import gridspec\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import py3Dmol\n",
"\n",
"from alphafold.model import model\n",
"from alphafold.model import config\n",
"from alphafold.model import data\n",
"\n",
"from alphafold.data import feature_processing\n",
"from alphafold.data import msa_pairing\n",
"from alphafold.data import pipeline\n",
"from alphafold.data import pipeline_multimer\n",
"from alphafold.data.tools import jackhmmer\n",
"\n",
"from alphafold.common import confidence\n",
"from alphafold.common import protein\n",
"\n",
"from alphafold.relax import relax\n",
"from alphafold.relax import utils\n",
"\n",
"from IPython import display\n",
"from ipywidgets import GridspecLayout\n",
"from ipywidgets import Output\n",
"\n",
"# Color bands for visualizing plddt\n",
"PLDDT_BANDS = [(0, 50, '#FF7D45'),\n",
" (50, 70, '#FFDB13'),\n",
" (70, 90, '#65CBF3'),\n",
" (90, 100, '#0053D6')]\n",
"\n",
"# --- Find the closest source ---\n",
"test_url_pattern = 'https://storage.googleapis.com/alphafold-colab{:s}/latest/uniref90_2022_01.fasta.1'\n",
"ex = futures.ThreadPoolExecutor(3)\n",
"def fetch(source):\n",
" request.urlretrieve(test_url_pattern.format(source))\n",
" return source\n",
"fs = [ex.submit(fetch, source) for source in ['', '-europe', '-asia']]\n",
"source = None\n",
"for f in futures.as_completed(fs):\n",
" source = f.result()\n",
" ex.shutdown()\n",
" break\n",
"\n",
"JACKHMMER_BINARY_PATH = '/usr/bin/jackhmmer'\n",
"DB_ROOT_PATH = f'https://storage.googleapis.com/alphafold-colab{source}/latest/'\n",
"# The z_value is the number of sequences in a database.\n",
"MSA_DATABASES = [\n",
" {'db_name': 'uniref90',\n",
" 'db_path': f'{DB_ROOT_PATH}uniref90_2022_01.fasta',\n",
" 'num_streamed_chunks': 62,\n",
" 'z_value': 144_113_457},\n",
" {'db_name': 'smallbfd',\n",
" 'db_path': f'{DB_ROOT_PATH}bfd-first_non_consensus_sequences.fasta',\n",
" 'num_streamed_chunks': 17,\n",
" 'z_value': 65_984_053},\n",
" {'db_name': 'mgnify',\n",
" 'db_path': f'{DB_ROOT_PATH}mgy_clusters_2022_05.fasta',\n",
" 'num_streamed_chunks': 120,\n",
" 'z_value': 623_796_864},\n",
"]\n",
"\n",
"# Search UniProt and construct the all_seq features only for heteromers, not homomers.\n",
"if model_type_to_use == ModelType.MULTIMER and len(set(sequences)) \u003e 1:\n",
" MSA_DATABASES.extend([\n",
" # Swiss-Prot and TrEMBL are concatenated together as UniProt.\n",
" {'db_name': 'uniprot',\n",
" 'db_path': f'{DB_ROOT_PATH}uniprot_2021_04.fasta',\n",
" 'num_streamed_chunks': 101,\n",
" 'z_value': 225_013_025 + 565_928},\n",
" ])\n",
"\n",
"TOTAL_JACKHMMER_CHUNKS = sum([cfg['num_streamed_chunks'] for cfg in MSA_DATABASES])\n",
"\n",
"MAX_HITS = {\n",
" 'uniref90': 10_000,\n",
" 'smallbfd': 5_000,\n",
" 'mgnify': 501,\n",
" 'uniprot': 50_000,\n",
"}\n",
"\n",
"\n",
"def get_msa(sequences):\n",
" \"\"\"Searches for MSA for given sequences using chunked Jackhmmer search.\n",
" \n",
" Args:\n",
" sequences: A list of sequences to search against all databases.\n",
"\n",
" Returns:\n",
" A dictionary mapping unique sequences to dicionaries mapping each database\n",
" to a list of results, one for each chunk of the database.\n",
" \"\"\"\n",
" sequence_to_fasta_path = {}\n",
" # Deduplicate to not do redundant work for multiple copies of the same chain in homomers.\n",
" for sequence_index, sequence in enumerate(sorted(set(sequences)), 1):\n",
" fasta_path = f'target_{sequence_index:02d}.fasta'\n",
" with open(fasta_path, 'wt') as f:\n",
" f.write(f'\u003equery\\n{sequence}')\n",
" sequence_to_fasta_path[sequence] = fasta_path\n",
"\n",
" # Run the search against chunks of genetic databases (since the genetic\n",
" # databases don't fit in Colab disk).\n",
" raw_msa_results = {sequence: {} for sequence in sequence_to_fasta_path.keys()}\n",
" print('\\nGetting MSA for all sequences')\n",
" with tqdm.notebook.tqdm(total=TOTAL_JACKHMMER_CHUNKS, bar_format=TQDM_BAR_FORMAT) as pbar:\n",
" def jackhmmer_chunk_callback(i):\n",
" pbar.update(n=1)\n",
"\n",
" for db_config in MSA_DATABASES:\n",
" db_name = db_config['db_name']\n",
" pbar.set_description(f'Searching {db_name}')\n",
" jackhmmer_runner = jackhmmer.Jackhmmer(\n",
" binary_path=JACKHMMER_BINARY_PATH,\n",
" database_path=db_config['db_path'],\n",
" get_tblout=True,\n",
" num_streamed_chunks=db_config['num_streamed_chunks'],\n",
" streaming_callback=jackhmmer_chunk_callback,\n",
" z_value=db_config['z_value'])\n",
" # Query all unique sequences against each chunk of the database to prevent\n",
" # redunantly fetching each chunk for each unique sequence.\n",
" results = jackhmmer_runner.query_multiple(list(sequence_to_fasta_path.values()))\n",
" for sequence, result_for_sequence in zip(sequence_to_fasta_path.keys(), results):\n",
" raw_msa_results[sequence][db_name] = result_for_sequence\n",
"\n",
" return raw_msa_results\n",
"\n",
"\n",
"features_for_chain = {}\n",
"raw_msa_results_for_sequence = get_msa(sequences)\n",
"for sequence_index, sequence in enumerate(sequences, start=1):\n",
" raw_msa_results = copy.deepcopy(raw_msa_results_for_sequence[sequence])\n",
"\n",
" # Extract the MSAs from the Stockholm files.\n",
" # NB: deduplication happens later in pipeline.make_msa_features.\n",
" single_chain_msas = []\n",
" uniprot_msa = None\n",
" for db_name, db_results in raw_msa_results.items():\n",
" merged_msa = notebook_utils.merge_chunked_msa(\n",
" results=db_results, max_hits=MAX_HITS.get(db_name))\n",
" if merged_msa.sequences and db_name != 'uniprot':\n",
" single_chain_msas.append(merged_msa)\n",
" msa_size = len(set(merged_msa.sequences))\n",
" print(f'{msa_size} unique sequences found in {db_name} for sequence {sequence_index}')\n",
" elif merged_msa.sequences and db_name == 'uniprot':\n",
" uniprot_msa = merged_msa\n",
"\n",
" notebook_utils.show_msa_info(single_chain_msas=single_chain_msas, sequence_index=sequence_index)\n",
"\n",
" # Turn the raw data into model features.\n",
" feature_dict = {}\n",
" feature_dict.update(pipeline.make_sequence_features(\n",
" sequence=sequence, description='query', num_res=len(sequence)))\n",
" feature_dict.update(pipeline.make_msa_features(msas=single_chain_msas))\n",
" # We don't use templates in AlphaFold Colab notebook, add only empty placeholder features.\n",
" feature_dict.update(notebook_utils.empty_placeholder_template_features(\n",
" num_templates=0, num_res=len(sequence)))\n",
"\n",
" # Construct the all_seq features only for heteromers, not homomers.\n",
" if model_type_to_use == ModelType.MULTIMER and len(set(sequences)) \u003e 1:\n",
" valid_feats = msa_pairing.MSA_FEATURES + (\n",
" 'msa_species_identifiers',\n",
" )\n",
" all_seq_features = {\n",
" f'{k}_all_seq': v for k, v in pipeline.make_msa_features([uniprot_msa]).items()\n",
" if k in valid_feats}\n",
" feature_dict.update(all_seq_features)\n",
"\n",
" features_for_chain[protein.PDB_CHAIN_IDS[sequence_index - 1]] = feature_dict\n",
"\n",
"\n",
"# Do further feature post-processing depending on the model type.\n",
"if model_type_to_use == ModelType.MONOMER:\n",
" np_example = features_for_chain[protein.PDB_CHAIN_IDS[0]]\n",
"\n",
"elif model_type_to_use == ModelType.MULTIMER:\n",
" all_chain_features = {}\n",
" for chain_id, chain_features in features_for_chain.items():\n",
" all_chain_features[chain_id] = pipeline_multimer.convert_monomer_features(\n",
" chain_features, chain_id)\n",
"\n",
" all_chain_features = pipeline_multimer.add_assembly_features(all_chain_features)\n",
"\n",
" np_example = feature_processing.pair_and_merge(\n",
" all_chain_features=all_chain_features)\n",
"\n",
" # Pad MSA to avoid zero-sized extra_msa.\n",
" np_example = pipeline_multimer.pad_msa(np_example, min_num_seq=512)\n",
"\n",
"executed_cells.add(4)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "XUo6foMQxwS2"
},
"outputs": [],
"source": [
"#@title 5. Run AlphaFold and download prediction\n",
"\n",
"#@markdown Once this cell has been executed, a zip-archive with\n",
"#@markdown the obtained prediction will be automatically downloaded\n",
"#@markdown to your computer.\n",
"\n",
"#@markdown In case you are having issues with the relaxation stage, you can disable it below.\n",
"#@markdown Warning: This means that the prediction might have distracting\n",
"#@markdown small stereochemical violations.\n",
"\n",
"run_relax = True #@param {type:\"boolean\"}\n",
"\n",
"#@markdown Relaxation is faster with a GPU, but we have found it to be less stable.\n",
"#@markdown You may wish to enable GPU for higher performance, but if it doesn't\n",
"#@markdown converge we suggested reverting to using without GPU.\n",
"\n",
"relax_use_gpu = False #@param {type:\"boolean\"}\n",
"\n",
"\n",
"#@markdown The multimer model will continue recycling until the predictions stop\n",
"#@markdown changing, up to the limit set here. For higher accuracy, at the \n",
"#@markdown potential cost of longer inference times, set this to 20.\n",
"\n",
"multimer_model_max_num_recycles = 3 #@param {type:\"integer\"}\n",
"\n",
"# Track cell execution to ensure correct order\n",
"notebook_utils.check_cell_execution_order(executed_cells, 5)\n",
"\n",
"# --- Run the model ---\n",
"if model_type_to_use == ModelType.MONOMER:\n",
" model_names = config.MODEL_PRESETS['monomer'] + ('model_2_ptm',)\n",
"elif model_type_to_use == ModelType.MULTIMER:\n",
" model_names = config.MODEL_PRESETS['multimer']\n",
"\n",
"output_dir = 'prediction'\n",
"os.makedirs(output_dir, exist_ok=True)\n",
"\n",
"plddts = {}\n",
"ranking_confidences = {}\n",
"pae_outputs = {}\n",
"unrelaxed_proteins = {}\n",
"\n",
"with tqdm.notebook.tqdm(total=len(model_names) + 1, bar_format=TQDM_BAR_FORMAT) as pbar:\n",
" for model_name in model_names:\n",
" pbar.set_description(f'Running {model_name}')\n",
"\n",
" cfg = config.model_config(model_name)\n",
"\n",
" if model_type_to_use == ModelType.MONOMER:\n",
" cfg.data.eval.num_ensemble = 1\n",
" elif model_type_to_use == ModelType.MULTIMER:\n",
" cfg.model.num_ensemble_eval = 1\n",
"\n",
" if model_type_to_use == ModelType.MULTIMER:\n",
" cfg.model.num_recycle = multimer_model_max_num_recycles\n",
" cfg.model.recycle_early_stop_tolerance = 0.5\n",
"\n",
" params = data.get_model_haiku_params(model_name, './alphafold/data')\n",
" model_runner = model.RunModel(cfg, params)\n",
" processed_feature_dict = model_runner.process_features(np_example, random_seed=0)\n",
" prediction = model_runner.predict(processed_feature_dict, random_seed=random.randrange(sys.maxsize))\n",
"\n",
" mean_plddt = prediction['plddt'].mean()\n",
"\n",
" if model_type_to_use == ModelType.MONOMER:\n",
" if 'predicted_aligned_error' in prediction:\n",
" pae_outputs[model_name] = (prediction['predicted_aligned_error'],\n",
" prediction['max_predicted_aligned_error'])\n",
" else:\n",
" # Monomer models are sorted by mean pLDDT. Do not put monomer pTM models here as they\n",
" # should never get selected.\n",
" ranking_confidences[model_name] = prediction['ranking_confidence']\n",
" plddts[model_name] = prediction['plddt']\n",
" elif model_type_to_use == ModelType.MULTIMER:\n",
" # Multimer models are sorted by pTM+ipTM.\n",
" ranking_confidences[model_name] = prediction['ranking_confidence']\n",
" plddts[model_name] = prediction['plddt']\n",
" pae_outputs[model_name] = (prediction['predicted_aligned_error'],\n",
" prediction['max_predicted_aligned_error'])\n",
"\n",
" # Set the b-factors to the per-residue plddt.\n",
" final_atom_mask = prediction['structure_module']['final_atom_mask']\n",
" b_factors = prediction['plddt'][:, None] * final_atom_mask\n",
" unrelaxed_protein = protein.from_prediction(\n",
" processed_feature_dict,\n",
" prediction,\n",
" b_factors=b_factors,\n",
" remove_leading_feature_dimension=(\n",
" model_type_to_use == ModelType.MONOMER))\n",
" unrelaxed_proteins[model_name] = unrelaxed_protein\n",
"\n",
" # Delete unused outputs to save memory.\n",
" del model_runner\n",
" del params\n",
" del prediction\n",
" pbar.update(n=1)\n",
"\n",
" # --- AMBER relax the best model ---\n",
"\n",
" # Find the best model according to the mean pLDDT.\n",
" best_model_name = max(ranking_confidences.keys(), key=lambda x: ranking_confidences[x])\n",
"\n",
" if run_relax:\n",
" pbar.set_description(f'AMBER relaxation')\n",
" amber_relaxer = relax.AmberRelaxation(\n",
" max_iterations=0,\n",
" tolerance=2.39,\n",
" stiffness=10.0,\n",
" exclude_residues=[],\n",
" max_outer_iterations=3,\n",
" use_gpu=relax_use_gpu)\n",
" relaxed_pdb, _, _ = amber_relaxer.process(prot=unrelaxed_proteins[best_model_name])\n",
" else:\n",
" print('Warning: Running without the relaxation stage.')\n",
" relaxed_pdb = protein.to_pdb(unrelaxed_proteins[best_model_name])\n",
" pbar.update(n=1) # Finished AMBER relax.\n",
"\n",
"# Construct multiclass b-factors to indicate confidence bands\n",
"# 0=very low, 1=low, 2=confident, 3=very high\n",
"banded_b_factors = []\n",
"for plddt in plddts[best_model_name]:\n",
" for idx, (min_val, max_val, _) in enumerate(PLDDT_BANDS):\n",
" if plddt \u003e= min_val and plddt \u003c= max_val:\n",
" banded_b_factors.append(idx)\n",
" break\n",
"banded_b_factors = np.array(banded_b_factors)[:, None] * final_atom_mask\n",
"to_visualize_pdb = utils.overwrite_b_factors(relaxed_pdb, banded_b_factors)\n",
"\n",
"\n",
"# Write out the prediction\n",
"pred_output_path = os.path.join(output_dir, 'selected_prediction.pdb')\n",
"with open(pred_output_path, 'w') as f:\n",
" f.write(relaxed_pdb)\n",
"\n",
"\n",
"# --- Visualise the prediction \u0026 confidence ---\n",
"show_sidechains = True\n",
"def plot_plddt_legend():\n",
" \"\"\"Plots the legend for pLDDT.\"\"\"\n",
" thresh = ['Very low (pLDDT \u003c 50)',\n",
" 'Low (70 \u003e pLDDT \u003e 50)',\n",
" 'Confident (90 \u003e pLDDT \u003e 70)',\n",
" 'Very high (pLDDT \u003e 90)']\n",
"\n",
" colors = [x[2] for x in PLDDT_BANDS]\n",
"\n",
" plt.figure(figsize=(2, 2))\n",
" for c in colors:\n",
" plt.bar(0, 0, color=c)\n",
" plt.legend(thresh, frameon=False, loc='center', fontsize=20)\n",
" plt.xticks([])\n",
" plt.yticks([])\n",
" ax = plt.gca()\n",
" ax.spines['right'].set_visible(False)\n",
" ax.spines['top'].set_visible(False)\n",
" ax.spines['left'].set_visible(False)\n",
" ax.spines['bottom'].set_visible(False)\n",
" plt.title('Model Confidence', fontsize=20, pad=20)\n",
" return plt\n",
"\n",
"# Show the structure coloured by chain if the multimer model has been used.\n",
"if model_type_to_use == ModelType.MULTIMER:\n",
" multichain_view = py3Dmol.view(width=800, height=600)\n",
" multichain_view.addModelsAsFrames(to_visualize_pdb)\n",
" multichain_style = {'cartoon': {'colorscheme': 'chain'}}\n",
" multichain_view.setStyle({'model': -1}, multichain_style)\n",
" multichain_view.zoomTo()\n",
" multichain_view.show()\n",
"\n",
"# Color the structure by per-residue pLDDT\n",
"color_map = {i: bands[2] for i, bands in enumerate(PLDDT_BANDS)}\n",
"view = py3Dmol.view(width=800, height=600)\n",
"view.addModelsAsFrames(to_visualize_pdb)\n",
"style = {'cartoon': {'colorscheme': {'prop': 'b', 'map': color_map}}}\n",
"if show_sidechains:\n",
" style['stick'] = {}\n",
"view.setStyle({'model': -1}, style)\n",
"view.zoomTo()\n",
"\n",
"grid = GridspecLayout(1, 2)\n",
"out = Output()\n",
"with out:\n",
" view.show()\n",
"grid[0, 0] = out\n",
"\n",
"out = Output()\n",
"with out:\n",
" plot_plddt_legend().show()\n",
"grid[0, 1] = out\n",
"\n",
"display.display(grid)\n",
"\n",
"# Display pLDDT and predicted aligned error (if output by the model).\n",
"if pae_outputs:\n",
" num_plots = 2\n",
"else:\n",
" num_plots = 1\n",
"\n",
"plt.figure(figsize=[8 * num_plots, 6])\n",
"plt.subplot(1, num_plots, 1)\n",
"plt.plot(plddts[best_model_name])\n",
"plt.title('Predicted LDDT')\n",
"plt.xlabel('Residue')\n",
"plt.ylabel('pLDDT')\n",
"\n",
"if num_plots == 2:\n",
" plt.subplot(1, 2, 2)\n",
" pae, max_pae = list(pae_outputs.values())[0]\n",
" plt.imshow(pae, vmin=0., vmax=max_pae, cmap='Greens_r')\n",
" plt.colorbar(fraction=0.046, pad=0.04)\n",
"\n",
" # Display lines at chain boundaries.\n",
" best_unrelaxed_prot = unrelaxed_proteins[best_model_name]\n",
" total_num_res = best_unrelaxed_prot.residue_index.shape[-1]\n",
" chain_ids = best_unrelaxed_prot.chain_index\n",
" for chain_boundary in np.nonzero(chain_ids[:-1] - chain_ids[1:]):\n",
" if chain_boundary.size:\n",
" plt.plot([0, total_num_res], [chain_boundary, chain_boundary], color='red')\n",
" plt.plot([chain_boundary, chain_boundary], [0, total_num_res], color='red')\n",
"\n",
" plt.title('Predicted Aligned Error')\n",
" plt.xlabel('Scored residue')\n",
" plt.ylabel('Aligned residue')\n",
"\n",
"# Save the predicted aligned error (if it exists).\n",
"pae_output_path = os.path.join(output_dir, 'predicted_aligned_error.json')\n",
"if pae_outputs:\n",
" # Save predicted aligned error in the same format as the AF EMBL DB.\n",
" pae_data = confidence.pae_json(pae=pae, max_pae=max_pae.item())\n",
" with open(pae_output_path, 'w') as f:\n",
" f.write(pae_data)\n",
"\n",
"# --- Download the predictions ---\n",
"shutil.make_archive(base_name='prediction', format='zip', root_dir=output_dir)\n",
"files.download(f'{output_dir}.zip')\n",
"\n",
"executed_cells.add(5)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "lUQAn5LYC5n4"
},
"source": [
"### Interpreting the prediction\n",
"\n",
"In general predicted LDDT (pLDDT) is best used for intra-domain confidence, whereas Predicted Aligned Error (PAE) is best used for determining between domain or between chain confidence.\n",
"\n",
"Please see the [AlphaFold methods paper](https://www.nature.com/articles/s41586-021-03819-2), the [AlphaFold predictions of the human proteome paper](https://www.nature.com/articles/s41586-021-03828-1), and the [AlphaFold-Multimer paper](https://www.biorxiv.org/content/10.1101/2021.10.04.463034v1) as well as [our FAQ](https://alphafold.ebi.ac.uk/faq) on how to interpret AlphaFold predictions."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jeb2z8DIA4om"
},
"source": [
"## FAQ \u0026 Troubleshooting\n",
"\n",
"\n",
"* How do I get a predicted protein structure for my protein?\n",
" * Click on the _Connect_ button on the top right to get started.\n",
" * Paste the amino acid sequence of your protein (without any headers) into the “Enter the amino acid sequence to fold”.\n",
" * Run all cells in the Colab, either by running them individually (with the play button on the left side) or via _Runtime_ \u003e _Run all._ Make sure you run all 5 cells in order.\n",
" * The predicted protein structure will be downloaded once all cells have been executed. Note: This can take minutes to hours - see below.\n",
"* How long will this take?\n",
" * Downloading the AlphaFold source code can take up to a few minutes.\n",
" * Downloading and installing the third-party software can take up to a few minutes.\n",
" * The search against genetic databases can take minutes to hours.\n",
" * Running AlphaFold and generating the prediction can take minutes to hours, depending on the length of your protein and on which GPU-type Colab has assigned you.\n",
"* My Colab no longer seems to be doing anything, what should I do?\n",
" * Some steps may take minutes to hours to complete.\n",
" * If nothing happens or if you receive an error message, try restarting your Colab runtime via _Runtime_ \u003e _Restart runtime_.\n",
" * If this doesn’t help, try resetting your Colab runtime via _Runtime_ \u003e _Factory reset runtime_.\n",
"* How does this compare to the open-source version of AlphaFold?\n",
" * This Colab version of AlphaFold searches a selected portion of the BFD dataset and currently doesn’t use templates, so its accuracy is reduced in comparison to the full version of AlphaFold that is described in the [AlphaFold paper](https://doi.org/10.1038/s41586-021-03819-2) and [Github repo](https://github.com/deepmind/alphafold/) (the full version is available via the inference script).\n",
"* What is a Colab?\n",
" * See the [Colab FAQ](https://research.google.com/colaboratory/faq.html).\n",
"* I received a warning “Notebook requires high RAM”, what do I do?\n",
" * The resources allocated to your Colab vary. See the [Colab FAQ](https://research.google.com/colaboratory/faq.html) for more details.\n",
" * You can execute the Colab nonetheless.\n",
"* I received an error “Colab CPU runtime not supported” or “No GPU/TPU found”, what do I do?\n",
" * Colab CPU runtime is not supported. Try changing your runtime via _Runtime_ \u003e _Change runtime type_ \u003e _Hardware accelerator_ \u003e _GPU_.\n",
" * The type of GPU allocated to your Colab varies. See the [Colab FAQ](https://research.google.com/colaboratory/faq.html) for more details.\n",
" * If you receive “Cannot connect to GPU backend”, you can try again later to see if Colab allocates you a GPU.\n",
" * [Colab Pro](https://colab.research.google.com/signup) offers priority access to GPUs.\n",
"* I received an error “ModuleNotFoundError: No module named ...”, even though I ran the cell that imports it, what do I do?\n",
" * Colab notebooks on the free tier time out after a certain amount of time. See the [Colab FAQ](https://research.google.com/colaboratory/faq.html#idle-timeouts). Try rerunning the whole notebook from the beginning.\n",
"* Does this tool install anything on my computer?\n",
" * No, everything happens in the cloud on Google Colab.\n",
" * At the end of the Colab execution a zip-archive with the obtained prediction will be automatically downloaded to your computer.\n",
"* How should I share feedback and bug reports?\n",
" * Please share any feedback and bug reports as an [issue](https://github.com/deepmind/alphafold/issues) on Github.\n",
"\n",
"\n",
"## Related work\n",
"\n",
"Take a look at these Colab notebooks provided by the community (please note that these notebooks may vary from our validated AlphaFold system and we cannot guarantee their accuracy):\n",
"\n",
"* The [ColabFold AlphaFold2 notebook](https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/AlphaFold2.ipynb) by Sergey Ovchinnikov, Milot Mirdita and Martin Steinegger, which uses an API hosted at the Södinglab based on the MMseqs2 server ([Mirdita et al. 2019, Bioinformatics](https://academic.oup.com/bioinformatics/article/35/16/2856/5280135)) for the multiple sequence alignment creation.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YfPhvYgKC81B"
},
"source": [
"# License and Disclaimer\n",
"\n",
"This is not an officially-supported Google product.\n",
"\n",
"This Colab notebook and other information provided is for theoretical modelling only, caution should be exercised in its use. It is provided ‘as-is’ without any warranty of any kind, whether expressed or implied. Information is not intended to be a substitute for professional medical advice, diagnosis, or treatment, and does not constitute medical or other professional advice.\n",
"\n",
"Copyright 2021 DeepMind Technologies Limited.\n",
"\n",
"\n",
"## AlphaFold Code License\n",
"\n",
"Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0.\n",
"\n",
"Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.\n",
"\n",
"## Model Parameters License\n",
"\n",
"The AlphaFold parameters are made available under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) license. You can find details at: https://creativecommons.org/licenses/by/4.0/legalcode\n",
"\n",
"\n",
"## Third-party software\n",
"\n",
"Use of the third-party software, libraries or code referred to in the [Acknowledgements section](https://github.com/deepmind/alphafold/#acknowledgements) in the AlphaFold README may be governed by separate terms and conditions or license provisions. Your use of the third-party software, libraries or code is subject to any such terms and you should check that you can comply with any applicable restrictions or terms and conditions before use.\n",
"\n",
"\n",
"## Mirrored Databases\n",
"\n",
"The following databases have been mirrored by DeepMind, and are available with reference to the following:\n",
"* UniProt: v2021\\_04 (unmodified), by The UniProt Consortium, available under a [Creative Commons Attribution-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nd/4.0/).\n",
"* UniRef90: v2022\\_01 (unmodified), by The UniProt Consortium, available under a [Creative Commons Attribution-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nd/4.0/).\n",
"* MGnify: v2022\\_05 (unmodified), by Mitchell AL et al., available free of all copyright restrictions and made fully and freely available for both non-commercial and commercial use under [CC0 1.0 Universal (CC0 1.0) Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/).\n",
"* BFD: (modified), by Steinegger M. and Söding J., modified by DeepMind, available under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by/4.0/). See the Methods section of the [AlphaFold proteome paper](https://www.nature.com/articles/s41586-021-03828-1) for details."
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"name": "AlphaFold.ipynb",
"private_outputs": true,
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
absl-py==1.0.0
biopython==1.79
chex==0.1.86
dm-haiku==0.0.12
dm-tree==0.1.8
docker==5.0.0
immutabledict==2.0.0
jax==0.4.26
ml-collections==0.1.0
numpy==1.24.3
pandas==2.0.3
scipy==1.11.1
tensorflow-cpu==2.16.1
absl-py==1.0.0
biopython==1.79
chex==0.1.86
dm-tree==0.1.8
docker==5.0.0
immutabledict==2.0.0
ml-collections==0.1.0
numpy==1.24.3
pandas==2.0.3
scipy==1.11.1
tensorflow-cpu==2.16.1
matplotlib
cython
# Copyright 2021 DeepMind Technologies Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Full AlphaFold protein structure prediction script."""
import enum
import json
import os
import pathlib
import pickle
import random
import shutil
import sys
import time
from typing import Any, Dict, Union
from absl import app
from absl import flags
from absl import logging
from alphafold.common import confidence
from alphafold.common import protein
from alphafold.common import residue_constants
from alphafold.data import pipeline
from alphafold.data import pipeline_multimer
from alphafold.data import templates
from alphafold.data.tools import hhsearch
from alphafold.data.tools import hmmsearch
from alphafold.model import config
from alphafold.model import data
from alphafold.model import model
from alphafold.relax import relax
import jax.numpy as jnp
import numpy as np
# Internal import (7716).
logging.set_verbosity(logging.INFO)
@enum.unique
class ModelsToRelax(enum.Enum):
ALL = 0
BEST = 1
NONE = 2
flags.DEFINE_list(
'fasta_paths', None, 'Paths to FASTA files, each containing a prediction '
'target that will be folded one after another. If a FASTA file contains '
'multiple sequences, then it will be folded as a multimer. Paths should be '
'separated by commas. All FASTA paths must have a unique basename as the '
'basename is used to name the output directories for each prediction.')
flags.DEFINE_string('data_dir', None, 'Path to directory of supporting data.')
flags.DEFINE_string('output_dir', None, 'Path to a directory that will '
'store the results.')
flags.DEFINE_string('jackhmmer_binary_path', shutil.which('jackhmmer'),
'Path to the JackHMMER executable.')
flags.DEFINE_string('hhblits_binary_path', shutil.which('hhblits'),
'Path to the HHblits executable.')
flags.DEFINE_string('hhsearch_binary_path', shutil.which('hhsearch'),
'Path to the HHsearch executable.')
flags.DEFINE_string('hmmsearch_binary_path', shutil.which('hmmsearch'),
'Path to the hmmsearch executable.')
flags.DEFINE_string('hmmbuild_binary_path', shutil.which('hmmbuild'),
'Path to the hmmbuild executable.')
flags.DEFINE_string('kalign_binary_path', shutil.which('kalign'),
'Path to the Kalign executable.')
flags.DEFINE_string('uniref90_database_path', None, 'Path to the Uniref90 '
'database for use by JackHMMER.')
flags.DEFINE_string('mgnify_database_path', None, 'Path to the MGnify '
'database for use by JackHMMER.')
flags.DEFINE_string('bfd_database_path', None, 'Path to the BFD '
'database for use by HHblits.')
flags.DEFINE_string('small_bfd_database_path', None, 'Path to the small '
'version of BFD used with the "reduced_dbs" preset.')
flags.DEFINE_string('uniref30_database_path', None, 'Path to the UniRef30 '
'database for use by HHblits.')
flags.DEFINE_string('uniprot_database_path', None, 'Path to the Uniprot '
'database for use by JackHMMer.')
flags.DEFINE_string('pdb70_database_path', None, 'Path to the PDB70 '
'database for use by HHsearch.')
flags.DEFINE_string('pdb_seqres_database_path', None, 'Path to the PDB '
'seqres database for use by hmmsearch.')
flags.DEFINE_string('template_mmcif_dir', None, 'Path to a directory with '
'template mmCIF structures, each named <pdb_id>.cif')
flags.DEFINE_string('max_template_date', None, 'Maximum template release date '
'to consider. Important if folding historical test sets.')
flags.DEFINE_string('obsolete_pdbs_path', None, 'Path to file containing a '
'mapping from obsolete PDB IDs to the PDB IDs of their '
'replacements.')
flags.DEFINE_enum('db_preset', 'full_dbs',
['full_dbs', 'reduced_dbs'],
'Choose preset MSA database configuration - '
'smaller genetic database config (reduced_dbs) or '
'full genetic database config (full_dbs)')
flags.DEFINE_enum('model_preset', 'monomer',
['monomer', 'monomer_casp14', 'monomer_ptm', 'multimer'],
'Choose preset model configuration - the monomer model, '
'the monomer model with extra ensembling, monomer model with '
'pTM head, or multimer model')
flags.DEFINE_boolean('benchmark', False, 'Run multiple JAX model evaluations '
'to obtain a timing that excludes the compilation time, '
'which should be more indicative of the time required for '
'inferencing many proteins.')
flags.DEFINE_integer('random_seed', None, 'The random seed for the data '
'pipeline. By default, this is randomly generated. Note '
                     'that even if this is set, AlphaFold may still not be '
'deterministic, because processes like GPU inference are '
'nondeterministic.')
flags.DEFINE_integer('num_multimer_predictions_per_model', 5, 'How many '
'predictions (each with a different random seed) will be '
'generated per model. E.g. if this is 2 and there are 5 '
'models then there will be 10 predictions per input. '
'Note: this FLAG only applies if model_preset=multimer')
flags.DEFINE_boolean('use_precomputed_msas', False, 'Whether to read MSAs that '
'have been written to disk instead of running the MSA '
'tools. The MSA files are looked up in the output '
'directory, so it must stay the same between multiple '
'runs that are to reuse the MSAs. WARNING: This will not '
'check if the sequence, database or configuration have '
'changed.')
flags.DEFINE_enum_class('models_to_relax', ModelsToRelax.BEST, ModelsToRelax,
'The models to run the final relaxation step on. '
'If `all`, all models are relaxed, which may be time '
'consuming. If `best`, only the most confident model '
'is relaxed. If `none`, relaxation is not run. Turning '
'off relaxation might result in predictions with '
'distracting stereochemical violations but might help '
'in case you are having issues with the relaxation '
'stage.')
flags.DEFINE_boolean('use_gpu_relax', None, 'Whether to relax on GPU. '
'Relax on GPU can be much faster than CPU, so it is '
'recommended to enable if possible. GPUs must be available'
' if this setting is enabled.')
FLAGS = flags.FLAGS
MAX_TEMPLATE_HITS = 20
RELAX_MAX_ITERATIONS = 0
RELAX_ENERGY_TOLERANCE = 2.39
RELAX_STIFFNESS = 10.0
RELAX_EXCLUDE_RESIDUES = []
RELAX_MAX_OUTER_ITERATIONS = 3
def _check_flag(flag_name: str,
other_flag_name: str,
should_be_set: bool):
if should_be_set != bool(FLAGS[flag_name].value):
verb = 'be' if should_be_set else 'not be'
raise ValueError(f'{flag_name} must {verb} set when running with '
f'"--{other_flag_name}={FLAGS[other_flag_name].value}".')
def _jnp_to_np(output: Dict[str, Any]) -> Dict[str, Any]:
"""Recursively changes jax arrays to numpy arrays."""
for k, v in output.items():
if isinstance(v, dict):
output[k] = _jnp_to_np(v)
elif isinstance(v, jnp.ndarray):
output[k] = np.array(v)
return output
def _save_confidence_json_file(
plddt: np.ndarray, output_dir: str, model_name: str
) -> None:
confidence_json = confidence.confidence_json(plddt)
# Save the confidence json.
confidence_json_output_path = os.path.join(
output_dir, f'confidence_{model_name}.json'
)
with open(confidence_json_output_path, 'w') as f:
f.write(confidence_json)
def _save_mmcif_file(
prot: protein.Protein,
output_dir: str,
model_name: str,
file_id: str,
model_type: str,
) -> None:
"""Crate mmCIF string and save to a file.
Args:
prot: Protein object.
output_dir: Directory to which files are saved.
model_name: Name of a model.
file_id: The file ID (usually the PDB ID) to be used in the mmCIF.
model_type: Monomer or multimer.
"""
mmcif_string = protein.to_mmcif(prot, file_id, model_type)
# Save the MMCIF.
mmcif_output_path = os.path.join(output_dir, f'{model_name}.cif')
with open(mmcif_output_path, 'w') as f:
f.write(mmcif_string)
def _save_pae_json_file(
pae: np.ndarray, max_pae: float, output_dir: str, model_name: str
) -> None:
"""Check prediction result for PAE data and save to a JSON file if present.
Args:
pae: The n_res x n_res PAE array.
max_pae: The maximum possible PAE value.
output_dir: Directory to which files are saved.
model_name: Name of a model.
"""
pae_json = confidence.pae_json(pae, max_pae)
# Save the PAE json.
pae_json_output_path = os.path.join(output_dir, f'pae_{model_name}.json')
with open(pae_json_output_path, 'w') as f:
f.write(pae_json)
def predict_structure(
fasta_path: str,
fasta_name: str,
output_dir_base: str,
data_pipeline: Union[pipeline.DataPipeline, pipeline_multimer.DataPipeline],
model_runners: Dict[str, model.RunModel],
amber_relaxer: relax.AmberRelaxation,
benchmark: bool,
random_seed: int,
models_to_relax: ModelsToRelax,
model_type: str,
):
"""Predicts structure using AlphaFold for the given sequence."""
logging.info('Predicting %s', fasta_name)
timings = {}
output_dir = os.path.join(output_dir_base, fasta_name)
if not os.path.exists(output_dir):
os.makedirs(output_dir)
msa_output_dir = os.path.join(output_dir, 'msas')
if not os.path.exists(msa_output_dir):
os.makedirs(msa_output_dir)
# Get features.
t_0 = time.time()
feature_dict = data_pipeline.process(
input_fasta_path=fasta_path,
msa_output_dir=msa_output_dir)
timings['features'] = time.time() - t_0
# Write out features as a pickled dictionary.
features_output_path = os.path.join(output_dir, 'features.pkl')
with open(features_output_path, 'wb') as f:
pickle.dump(feature_dict, f, protocol=4)
unrelaxed_pdbs = {}
unrelaxed_proteins = {}
relaxed_pdbs = {}
relax_metrics = {}
ranking_confidences = {}
# Run the models.
num_models = len(model_runners)
for model_index, (model_name, model_runner) in enumerate(
model_runners.items()):
logging.info('Running model %s on %s', model_name, fasta_name)
t_0 = time.time()
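    # Give each model a distinct, reproducible seed derived from the base seed.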
model_random_seed = model_index + random_seed * num_models
processed_feature_dict = model_runner.process_features(
feature_dict, random_seed=model_random_seed)
timings[f'process_features_{model_name}'] = time.time() - t_0
t_0 = time.time()
prediction_result = model_runner.predict(processed_feature_dict,
random_seed=model_random_seed)
t_diff = time.time() - t_0
timings[f'predict_and_compile_{model_name}'] = t_diff
logging.info(
'Total JAX model %s on %s predict time (includes compilation time, see --benchmark): %.1fs',
model_name, fasta_name, t_diff)
if benchmark:
t_0 = time.time()
model_runner.predict(processed_feature_dict,
random_seed=model_random_seed)
t_diff = time.time() - t_0
timings[f'predict_benchmark_{model_name}'] = t_diff
logging.info(
'Total JAX model %s on %s predict time (excludes compilation time): %.1fs',
model_name, fasta_name, t_diff)
plddt = prediction_result['plddt']
_save_confidence_json_file(plddt, output_dir, model_name)
ranking_confidences[model_name] = prediction_result['ranking_confidence']
if (
'predicted_aligned_error' in prediction_result
and 'max_predicted_aligned_error' in prediction_result
):
pae = prediction_result['predicted_aligned_error']
max_pae = prediction_result['max_predicted_aligned_error']
_save_pae_json_file(pae, float(max_pae), output_dir, model_name)
# Remove jax dependency from results.
np_prediction_result = _jnp_to_np(dict(prediction_result))
# Save the model outputs.
result_output_path = os.path.join(output_dir, f'result_{model_name}.pkl')
with open(result_output_path, 'wb') as f:
pickle.dump(np_prediction_result, f, protocol=4)
# Add the predicted LDDT in the b-factor column.
# Note that higher predicted LDDT value means higher model confidence.
plddt_b_factors = np.repeat(
plddt[:, None], residue_constants.atom_type_num, axis=-1)
unrelaxed_protein = protein.from_prediction(
features=processed_feature_dict,
result=prediction_result,
b_factors=plddt_b_factors,
remove_leading_feature_dimension=not model_runner.multimer_mode)
unrelaxed_proteins[model_name] = unrelaxed_protein
unrelaxed_pdbs[model_name] = protein.to_pdb(unrelaxed_protein)
unrelaxed_pdb_path = os.path.join(output_dir, f'unrelaxed_{model_name}.pdb')
with open(unrelaxed_pdb_path, 'w') as f:
f.write(unrelaxed_pdbs[model_name])
_save_mmcif_file(
prot=unrelaxed_protein,
output_dir=output_dir,
model_name=f'unrelaxed_{model_name}',
file_id=str(model_index),
model_type=model_type,
)
# Rank by model confidence.
ranked_order = [
model_name for model_name, confidence in
sorted(ranking_confidences.items(), key=lambda x: x[1], reverse=True)]
# Relax predictions.
if models_to_relax == ModelsToRelax.BEST:
to_relax = [ranked_order[0]]
elif models_to_relax == ModelsToRelax.ALL:
to_relax = ranked_order
elif models_to_relax == ModelsToRelax.NONE:
to_relax = []
for model_name in to_relax:
t_0 = time.time()
relaxed_pdb_str, _, violations = amber_relaxer.process(
prot=unrelaxed_proteins[model_name])
relax_metrics[model_name] = {
'remaining_violations': violations,
'remaining_violations_count': sum(violations)
}
timings[f'relax_{model_name}'] = time.time() - t_0
relaxed_pdbs[model_name] = relaxed_pdb_str
# Save the relaxed PDB.
relaxed_output_path = os.path.join(
output_dir, f'relaxed_{model_name}.pdb')
with open(relaxed_output_path, 'w') as f:
f.write(relaxed_pdb_str)
relaxed_protein = protein.from_pdb_string(relaxed_pdb_str)
_save_mmcif_file(
prot=relaxed_protein,
output_dir=output_dir,
model_name=f'relaxed_{model_name}',
file_id='0',
model_type=model_type,
)
# Write out relaxed PDBs in rank order.
for idx, model_name in enumerate(ranked_order):
ranked_output_path = os.path.join(output_dir, f'ranked_{idx}.pdb')
with open(ranked_output_path, 'w') as f:
if model_name in relaxed_pdbs:
f.write(relaxed_pdbs[model_name])
else:
f.write(unrelaxed_pdbs[model_name])
if model_name in relaxed_pdbs:
protein_instance = protein.from_pdb_string(relaxed_pdbs[model_name])
else:
protein_instance = protein.from_pdb_string(unrelaxed_pdbs[model_name])
_save_mmcif_file(
prot=protein_instance,
output_dir=output_dir,
model_name=f'ranked_{idx}',
file_id=str(idx),
model_type=model_type,
)
ranking_output_path = os.path.join(output_dir, 'ranking_debug.json')
with open(ranking_output_path, 'w') as f:
label = 'iptm+ptm' if 'iptm' in prediction_result else 'plddts'
f.write(json.dumps(
{label: ranking_confidences, 'order': ranked_order}, indent=4))
logging.info('Final timings for %s: %s', fasta_name, timings)
timings_output_path = os.path.join(output_dir, 'timings.json')
with open(timings_output_path, 'w') as f:
f.write(json.dumps(timings, indent=4))
if models_to_relax != ModelsToRelax.NONE:
relax_metrics_path = os.path.join(output_dir, 'relax_metrics.json')
with open(relax_metrics_path, 'w') as f:
f.write(json.dumps(relax_metrics, indent=4))
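
# For reference, predict_structure leaves the following per-target layout under
# <output_dir_base>/<fasta_name>/ (names taken from the code above; PAE files
# appear only for models that report it, relaxed files only when relaxation is
# requested):
#   features.pkl, msas/, result_<model>.pkl, confidence_<model>.json,
#   pae_<model>.json, unrelaxed_<model>.pdb/.cif, relaxed_<model>.pdb/.cif,
#   ranked_<N>.pdb/.cif, ranking_debug.json, timings.json, relax_metrics.json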


def main(argv):
if len(argv) > 1:
raise app.UsageError('Too many command-line arguments.')
for tool_name in (
'jackhmmer', 'hhblits', 'hhsearch', 'hmmsearch', 'hmmbuild', 'kalign'):
if not FLAGS[f'{tool_name}_binary_path'].value:
raise ValueError(f'Could not find path to the "{tool_name}" binary. Make '
'sure it is installed on your system.')
use_small_bfd = FLAGS.db_preset == 'reduced_dbs'
_check_flag('small_bfd_database_path', 'db_preset',
should_be_set=use_small_bfd)
_check_flag('bfd_database_path', 'db_preset',
should_be_set=not use_small_bfd)
_check_flag('uniref30_database_path', 'db_preset',
should_be_set=not use_small_bfd)
run_multimer_system = 'multimer' in FLAGS.model_preset
model_type = 'Multimer' if run_multimer_system else 'Monomer'
_check_flag('pdb70_database_path', 'model_preset',
should_be_set=not run_multimer_system)
_check_flag('pdb_seqres_database_path', 'model_preset',
should_be_set=run_multimer_system)
_check_flag('uniprot_database_path', 'model_preset',
should_be_set=run_multimer_system)
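  # The monomer_casp14 preset evaluates each model with 8-fold ensembling (the
  # CASP14 configuration); all other presets use a single pass.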
if FLAGS.model_preset == 'monomer_casp14':
num_ensemble = 8
else:
num_ensemble = 1
# Check for duplicate FASTA file names.
fasta_names = [pathlib.Path(p).stem for p in FLAGS.fasta_paths]
if len(fasta_names) != len(set(fasta_names)):
raise ValueError('All FASTA paths must have a unique basename.')
if run_multimer_system:
template_searcher = hmmsearch.Hmmsearch(
binary_path=FLAGS.hmmsearch_binary_path,
hmmbuild_binary_path=FLAGS.hmmbuild_binary_path,
database_path=FLAGS.pdb_seqres_database_path)
template_featurizer = templates.HmmsearchHitFeaturizer(
mmcif_dir=FLAGS.template_mmcif_dir,
max_template_date=FLAGS.max_template_date,
max_hits=MAX_TEMPLATE_HITS,
kalign_binary_path=FLAGS.kalign_binary_path,
release_dates_path=None,
obsolete_pdbs_path=FLAGS.obsolete_pdbs_path)
else:
template_searcher = hhsearch.HHSearch(
binary_path=FLAGS.hhsearch_binary_path,
databases=[FLAGS.pdb70_database_path])
template_featurizer = templates.HhsearchHitFeaturizer(
mmcif_dir=FLAGS.template_mmcif_dir,
max_template_date=FLAGS.max_template_date,
max_hits=MAX_TEMPLATE_HITS,
kalign_binary_path=FLAGS.kalign_binary_path,
release_dates_path=None,
obsolete_pdbs_path=FLAGS.obsolete_pdbs_path)
monomer_data_pipeline = pipeline.DataPipeline(
jackhmmer_binary_path=FLAGS.jackhmmer_binary_path,
hhblits_binary_path=FLAGS.hhblits_binary_path,
uniref90_database_path=FLAGS.uniref90_database_path,
mgnify_database_path=FLAGS.mgnify_database_path,
bfd_database_path=FLAGS.bfd_database_path,
uniref30_database_path=FLAGS.uniref30_database_path,
small_bfd_database_path=FLAGS.small_bfd_database_path,
template_searcher=template_searcher,
template_featurizer=template_featurizer,
use_small_bfd=use_small_bfd,
use_precomputed_msas=FLAGS.use_precomputed_msas)
if run_multimer_system:
num_predictions_per_model = FLAGS.num_multimer_predictions_per_model
data_pipeline = pipeline_multimer.DataPipeline(
monomer_data_pipeline=monomer_data_pipeline,
jackhmmer_binary_path=FLAGS.jackhmmer_binary_path,
uniprot_database_path=FLAGS.uniprot_database_path,
use_precomputed_msas=FLAGS.use_precomputed_msas)
else:
num_predictions_per_model = 1
data_pipeline = monomer_data_pipeline
model_runners = {}
model_names = config.MODEL_PRESETS[FLAGS.model_preset]
for model_name in model_names:
model_config = config.model_config(model_name)
if run_multimer_system:
model_config.model.num_ensemble_eval = num_ensemble
else:
model_config.data.eval.num_ensemble = num_ensemble
model_params = data.get_model_haiku_params(
model_name=model_name, data_dir=FLAGS.data_dir)
model_runner = model.RunModel(model_config, model_params)
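    # One RunModel instance is shared across the num_predictions_per_model
    # entries; the predictions differ only through the per-run random seed.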
for i in range(num_predictions_per_model):
model_runners[f'{model_name}_pred_{i}'] = model_runner
logging.info('Have %d models: %s', len(model_runners),
list(model_runners.keys()))
amber_relaxer = relax.AmberRelaxation(
max_iterations=RELAX_MAX_ITERATIONS,
tolerance=RELAX_ENERGY_TOLERANCE,
stiffness=RELAX_STIFFNESS,
exclude_residues=RELAX_EXCLUDE_RESIDUES,
max_outer_iterations=RELAX_MAX_OUTER_ITERATIONS,
use_gpu=FLAGS.use_gpu_relax)
random_seed = FLAGS.random_seed
if random_seed is None:
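    # Divide by the number of model runners so that model_index +
    # random_seed * num_models (computed in predict_structure) stays within
    # sys.maxsize.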
random_seed = random.randrange(sys.maxsize // len(model_runners))
logging.info('Using random seed %d for the data pipeline', random_seed)
# Predict structure for each of the sequences.
for i, fasta_path in enumerate(FLAGS.fasta_paths):
fasta_name = fasta_names[i]
predict_structure(
fasta_path=fasta_path,
fasta_name=fasta_name,
output_dir_base=FLAGS.output_dir,
data_pipeline=data_pipeline,
model_runners=model_runners,
amber_relaxer=amber_relaxer,
benchmark=FLAGS.benchmark,
random_seed=random_seed,
models_to_relax=FLAGS.models_to_relax,
model_type=model_type,
)


if __name__ == '__main__':
flags.mark_flags_as_required([
'fasta_paths',
'output_dir',
'data_dir',
'uniref90_database_path',
'mgnify_database_path',
'template_mmcif_dir',
'max_template_date',
'obsolete_pdbs_path',
'use_gpu_relax',
])
app.run(main)
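
# A minimal post-run sketch for reading back the ranking written by
# predict_structure. It assumes only the ranking_debug.json layout produced
# above; the 'out/test' directory is a hypothetical example path:
#
#   import json
#   import os
#
#   target_dir = 'out/test'
#   with open(os.path.join(target_dir, 'ranking_debug.json')) as f:
#       ranking = json.load(f)
#   best_model = ranking['order'][0]  # same model saved as ranked_0.pdb/.cif
#   print('Best model:', best_model)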


# Copyright 2021 DeepMind Technologies Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for run_alphafold."""
import json
import os
from absl.testing import absltest
from absl.testing import parameterized
import run_alphafold
import mock
import numpy as np
# Internal import (7716).
TEST_DATA_DIR = 'alphafold/common/testdata/'


class RunAlphafoldTest(parameterized.TestCase):
@parameterized.named_parameters(
('relax', run_alphafold.ModelsToRelax.ALL),
('no_relax', run_alphafold.ModelsToRelax.NONE),
)
def test_end_to_end(self, models_to_relax):
data_pipeline_mock = mock.Mock()
model_runner_mock = mock.Mock()
amber_relaxer_mock = mock.Mock()
data_pipeline_mock.process.return_value = {}
model_runner_mock.process_features.return_value = {
'aatype': np.zeros((12, 10), dtype=np.int32),
'residue_index': np.tile(np.arange(10, dtype=np.int32)[None], (12, 1)),
}
model_runner_mock.predict.return_value = {
'structure_module': {
'final_atom_positions': np.zeros((10, 37, 3)),
'final_atom_mask': np.ones((10, 37)),
},
'predicted_lddt': {
'logits': np.ones((10, 50)),
},
'plddt': np.ones(10) * 42,
'ranking_confidence': 90,
'ptm': np.array(0.),
'aligned_confidence_probs': np.zeros((10, 10, 50)),
'predicted_aligned_error': np.zeros((10, 10)),
'max_predicted_aligned_error': np.array(0.),
}
model_runner_mock.multimer_mode = False
with open(
os.path.join(
absltest.get_default_test_srcdir(), TEST_DATA_DIR, 'glucagon.pdb'
)
) as f:
pdb_string = f.read()
amber_relaxer_mock.process.return_value = (
pdb_string,
None,
[1.0, 0.0, 0.0],
)
out_dir = self.create_tempdir().full_path
fasta_path = os.path.join(out_dir, 'target.fasta')
with open(fasta_path, 'wt') as f:
f.write('>A\nAAAAAAAAAAAAA')
fasta_name = 'test'
run_alphafold.predict_structure(
fasta_path=fasta_path,
fasta_name=fasta_name,
output_dir_base=out_dir,
data_pipeline=data_pipeline_mock,
model_runners={'model1': model_runner_mock},
amber_relaxer=amber_relaxer_mock,
benchmark=False,
random_seed=0,
models_to_relax=models_to_relax,
model_type='Monomer',
)
base_output_files = os.listdir(out_dir)
self.assertIn('target.fasta', base_output_files)
self.assertIn('test', base_output_files)
target_output_files = os.listdir(os.path.join(out_dir, 'test'))
expected_files = [
'confidence_model1.json',
'features.pkl',
'msas',
'pae_model1.json',
'ranked_0.cif',
'ranked_0.pdb',
'ranking_debug.json',
'result_model1.pkl',
'timings.json',
'unrelaxed_model1.cif',
'unrelaxed_model1.pdb',
]
if models_to_relax == run_alphafold.ModelsToRelax.ALL:
expected_files.extend(
['relaxed_model1.cif', 'relaxed_model1.pdb', 'relax_metrics.json']
)
with open(os.path.join(out_dir, 'test', 'relax_metrics.json')) as f:
relax_metrics = json.loads(f.read())
self.assertDictEqual({'model1': {'remaining_violations': [1.0, 0.0, 0.0],
'remaining_violations_count': 1.0}},
relax_metrics)
self.assertCountEqual(expected_files, target_output_files)
# Check that pLDDT is set in the B-factor column.
with open(os.path.join(out_dir, 'test', 'unrelaxed_model1.pdb')) as f:
for line in f:
if line.startswith('ATOM'):
self.assertEqual(line[61:66], '42.00')


if __name__ == '__main__':
absltest.main()


#!/bin/bash
download_dir=/home/chuangkj/alphafold2_jax/downloads
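# Note: --uniref30_database_path below points at a Uniclust30 (2018_08) build;
# if you fetched databases with download_all_data.sh, point it at the
# directory created by download_uniref30.sh instead.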
python3 run_alphafold.py \
--fasta_paths=rcsb_pdb_8U23.fasta \
--output_dir=./ \
--use_precomputed_msas=false \
--data_dir=$download_dir \
--uniref90_database_path=$download_dir/uniref90/uniref90.fasta \
--mgnify_database_path=$download_dir/mgnify/mgy_clusters_2018_12.fa \
--bfd_database_path=$download_dir/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
--uniref30_database_path=$download_dir/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
--pdb70_database_path=$download_dir/pdb70/pdb70 \
--template_mmcif_dir=$download_dir/pdb_mmcif/mmcif_files \
--obsolete_pdbs_path=$download_dir/pdb_mmcif/obsolete.dat \
--max_template_date=2024-05-14 \
--model_preset=monomer \
--db_preset=full_dbs \
--models_to_relax=best \
--use_gpu_relax=false \
--benchmark=true


#!/bin/bash
download_dir=/home/chuangkj/alphafold2_jax/downloads
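# As in the monomer script, --uniref30_database_path points at a Uniclust30
# (2018_08) build, and --uniprot_database_path points at TrEMBL only, whereas
# download_uniprot.sh builds a combined uniprot.fasta; adjust both to match
# your local downloads. multimer.fasta is expected to contain one FASTA record
# per chain (placeholder sequences shown):
#   >chainA
#   MAHHHHHHVNDLG
#   >chainB
#   MKTAYIAKQRQIS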
python3 run_alphafold.py \
--fasta_paths=multimer.fasta \
--output_dir=./ \
--use_precomputed_msas=false \
--data_dir=$download_dir \
--uniref90_database_path=$download_dir/uniref90/uniref90.fasta \
--mgnify_database_path=$download_dir/mgnify/mgy_clusters_2018_12.fa \
--bfd_database_path=$download_dir/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
--uniref30_database_path=$download_dir/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
--uniprot_database_path=$download_dir/uniprot/uniprot_trembl.fasta \
--pdb_seqres_database_path=$download_dir/pdb_seqres/pdb_seqres.txt \
--template_mmcif_dir=$download_dir/pdb_mmcif/mmcif_files \
--obsolete_pdbs_path=$download_dir/pdb_mmcif/obsolete.dat \
--max_template_date=2024-05-14 \
--model_preset=multimer \
--db_preset=full_dbs \
--models_to_relax=best \
--use_gpu_relax=false \
--benchmark=true


#!/bin/bash
#
# Copyright 2021 DeepMind Technologies Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Downloads and unzips all required data for AlphaFold.
#
# Usage: bash download_all_data.sh /path/to/download/directory
set -e
if [[ $# -eq 0 ]]; then
echo "Error: download directory must be provided as an input argument."
exit 1
fi
if ! command -v aria2c &> /dev/null ; then
echo "Error: aria2c could not be found. Please install aria2c (sudo apt install aria2)."
exit 1
fi
DOWNLOAD_DIR="$1"
DOWNLOAD_MODE="${2:-full_dbs}" # Default mode to full_dbs.
if [[ "${DOWNLOAD_MODE}" != full_dbs && "${DOWNLOAD_MODE}" != reduced_dbs ]]
then
echo "DOWNLOAD_MODE ${DOWNLOAD_MODE} not recognized."
exit 1
fi
SCRIPT_DIR="$(dirname "$(realpath "$0")")"
echo "Downloading AlphaFold parameters..."
bash "${SCRIPT_DIR}/download_alphafold_params.sh" "${DOWNLOAD_DIR}"
if [[ "${DOWNLOAD_MODE}" = reduced_dbs ]] ; then
echo "Downloading Small BFD..."
bash "${SCRIPT_DIR}/download_small_bfd.sh" "${DOWNLOAD_DIR}"
else
echo "Downloading BFD..."
bash "${SCRIPT_DIR}/download_bfd.sh" "${DOWNLOAD_DIR}"
fi
echo "Downloading MGnify..."
bash "${SCRIPT_DIR}/download_mgnify.sh" "${DOWNLOAD_DIR}"
echo "Downloading PDB70..."
bash "${SCRIPT_DIR}/download_pdb70.sh" "${DOWNLOAD_DIR}"
echo "Downloading PDB mmCIF files..."
bash "${SCRIPT_DIR}/download_pdb_mmcif.sh" "${DOWNLOAD_DIR}"
echo "Downloading Uniref30..."
bash "${SCRIPT_DIR}/download_uniref30.sh" "${DOWNLOAD_DIR}"
echo "Downloading Uniref90..."
bash "${SCRIPT_DIR}/download_uniref90.sh" "${DOWNLOAD_DIR}"
echo "Downloading UniProt..."
bash "${SCRIPT_DIR}/download_uniprot.sh" "${DOWNLOAD_DIR}"
echo "Downloading PDB SeqRes..."
bash "${SCRIPT_DIR}/download_pdb_seqres.sh" "${DOWNLOAD_DIR}"
echo "All data downloaded."


#!/bin/bash
#
# Copyright 2021 DeepMind Technologies Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Downloads and unzips the AlphaFold parameters.
#
# Usage: bash download_alphafold_params.sh /path/to/download/directory
set -e
if [[ $# -eq 0 ]]; then
echo "Error: download directory must be provided as an input argument."
exit 1
fi
if ! command -v aria2c &> /dev/null ; then
echo "Error: aria2c could not be found. Please install aria2c (sudo apt install aria2)."
exit 1
fi
DOWNLOAD_DIR="$1"
ROOT_DIR="${DOWNLOAD_DIR}/params"
SOURCE_URL="https://storage.googleapis.com/alphafold/alphafold_params_2022-12-06.tar"
BASENAME=$(basename "${SOURCE_URL}")
mkdir --parents "${ROOT_DIR}"
aria2c "${SOURCE_URL}" --dir="${ROOT_DIR}"
tar --extract --verbose --file="${ROOT_DIR}/${BASENAME}" \
--directory="${ROOT_DIR}" --preserve-permissions
rm "${ROOT_DIR}/${BASENAME}"


#!/bin/bash
#
# Copyright 2021 DeepMind Technologies Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Downloads and unzips the BFD database for AlphaFold.
#
# Usage: bash download_bfd.sh /path/to/download/directory
set -e
if [[ $# -eq 0 ]]; then
echo "Error: download directory must be provided as an input argument."
exit 1
fi
if ! command -v aria2c &> /dev/null ; then
echo "Error: aria2c could not be found. Please install aria2c (sudo apt install aria2)."
exit 1
fi
DOWNLOAD_DIR="$1"
ROOT_DIR="${DOWNLOAD_DIR}/bfd"
# Mirror of:
# https://bfd.mmseqs.com/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt.tar.gz.
SOURCE_URL="https://storage.googleapis.com/alphafold-databases/casp14_versions/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt.tar.gz"
BASENAME=$(basename "${SOURCE_URL}")
mkdir --parents "${ROOT_DIR}"
aria2c "${SOURCE_URL}" --dir="${ROOT_DIR}"
tar --extract --verbose --file="${ROOT_DIR}/${BASENAME}" \
--directory="${ROOT_DIR}"
rm "${ROOT_DIR}/${BASENAME}"