{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pc5-mbsX9PZC"
      },
      "source": [
        "# AlphaFold Colab\n",
        "\n",
        "This Colab notebook allows you to easily predict the structure of a protein using a slightly simplified version of [AlphaFold v2.3.1](https://doi.org/10.1038/s41586-021-03819-2). \n",
        "\n",
        "**Differences to AlphaFold v2.3.1**\n",
        "\n",
        "In comparison to AlphaFold v2.3.1, this Colab notebook uses **no templates (homologous structures)** and a selected portion of the [BFD database](https://bfd.mmseqs.com/). We have validated these changes on several thousand recent PDB structures. While accuracy will be near-identical to the full AlphaFold system on many targets, a small fraction have a large drop in accuracy due to the smaller MSA and lack of templates. For best reliability, we recommend instead using the [full open source AlphaFold](https://github.com/deepmind/alphafold/), or the [AlphaFold Protein Structure Database](https://alphafold.ebi.ac.uk/).\n",
        "\n",
        "**This Colab has a small drop in average accuracy for multimers compared to a local AlphaFold installation; for full multimer accuracy it is highly recommended to run [AlphaFold locally](https://github.com/deepmind/alphafold#running-alphafold).** Moreover, AlphaFold-Multimer requires searching for an MSA for every unique sequence in the complex, so it is substantially slower. If your notebook times out due to a slow multimer MSA search, we recommend either using Colab Pro or running AlphaFold locally.\n",
        "\n",
        "Please note that this Colab notebook is provided for theoretical modelling only; caution should be exercised in its use.\n",
        "\n",
        "The **PAE file format** has been updated to match AFDB. Please see the [AFDB FAQ](https://alphafold.ebi.ac.uk/faq/#faq-7) for a description of the new format.\n",
        "\n",
        "**Citing this work**\n",
        "\n",
        "Any publication that discloses findings arising from using this notebook should [cite](https://github.com/deepmind/alphafold/#citing-this-work) the [AlphaFold paper](https://doi.org/10.1038/s41586-021-03819-2).\n",
        "\n",
        "**Licenses**\n",
        "\n",
        "This Colab uses the [AlphaFold model parameters](https://github.com/deepmind/alphafold/#model-parameters-license) which are subject to the Creative Commons Attribution 4.0 International ([CC BY 4.0](https://creativecommons.org/licenses/by/4.0/legalcode)) license. The Colab itself is provided under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0). See the full license statement below.\n",
        "\n",
        "**More information**\n",
        "\n",
        "You can find more information about how AlphaFold works in the following papers:\n",
        "\n",
        "*   [AlphaFold methods paper](https://www.nature.com/articles/s41586-021-03819-2)\n",
        "*   [AlphaFold predictions of the human proteome paper](https://www.nature.com/articles/s41586-021-03828-1)\n",
        "*   [AlphaFold-Multimer paper](https://www.biorxiv.org/content/10.1101/2021.10.04.463034v1)\n",
        "\n",
        "An FAQ on how to interpret AlphaFold predictions is available [here](https://alphafold.ebi.ac.uk/faq).\n",
        "\n",
        "If you have any questions not covered in the FAQ, please contact the AlphaFold team at alphafold@deepmind.com.\n",
        "\n",
        "**Get in touch**\n",
        "\n",
        "We would love to hear your feedback and understand how AlphaFold has been useful in your research. Share your stories with us at alphafold@deepmind.com.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uC1dKAwk2eyl"
      },
      "source": [
        "## Setup\n",
        "\n",
        "Start by running the 2 cells below to set up AlphaFold and all required software."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "woIxeCPygt7K"
      },
      "outputs": [],
      "source": [
        "# Set environment variables before running any other code.\n",
        "import os\n",
        "os.environ['TF_FORCE_UNIFIED_MEMORY'] = '1'\n",
        "os.environ['XLA_PYTHON_CLIENT_MEM_FRACTION'] = '4.0'\n",
        "\n",
        "#@title 1. Install third-party software\n",
        "\n",
        "#@markdown Please execute this cell by pressing the _Play_ button\n",
        "#@markdown on the left to download and import third-party software\n",
        "#@markdown in this Colab notebook. (See the [acknowledgements](https://github.com/deepmind/alphafold/#acknowledgements) in our readme.)\n",
        "\n",
        "#@markdown **Note**: This installs the software on the Colab\n",
        "#@markdown notebook in the cloud and not on your computer.\n",
        "\n",
        "from IPython.utils import io\n",
        "import os\n",
        "import subprocess\n",
        "import tqdm.notebook\n",
        "\n",
        "TQDM_BAR_FORMAT = '{l_bar}{bar}| {n_fmt}/{total_fmt} [elapsed: {elapsed} remaining: {remaining}]'\n",
        "\n",
        "try:\n",
        "  with tqdm.notebook.tqdm(total=100, bar_format=TQDM_BAR_FORMAT) as pbar:\n",
        "    with io.capture_output() as captured:\n",
        "      # Uninstall default Colab version of TF.\n",
        "      %shell pip uninstall -y tensorflow\n",
        "\n",
        "      %shell sudo apt install --quiet --yes hmmer\n",
        "      pbar.update(6)\n",
        "\n",
        "      # Install py3dmol.\n",
        "      %shell pip install py3dmol\n",
        "      pbar.update(2)\n",
        "\n",
        "      # Install OpenMM and pdbfixer.\n",
        "      %shell rm -rf /opt/conda\n",
        "      %shell wget -q -P /tmp \\\n",
        "        https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \\\n",
        "          \u0026\u0026 bash /tmp/Miniconda3-latest-Linux-x86_64.sh -b -p /opt/conda \\\n",
        "          \u0026\u0026 rm /tmp/Miniconda3-latest-Linux-x86_64.sh\n",
        "      pbar.update(9)\n",
        "\n",
        "      PATH=%env PATH\n",
        "      %env PATH=/opt/conda/bin:{PATH}\n",
        "      %shell conda install -qy conda==4.13.0 \\\n",
        "          \u0026\u0026 conda install -qy -c conda-forge \\\n",
        "            python=3.8 \\\n",
        "            openmm=7.5.1 \\\n",
        "            pdbfixer\n",
        "      pbar.update(80)\n",
        "\n",
        "      # Create a ramdisk to store a database chunk to make Jackhmmer run fast.\n",
        "      %shell sudo mkdir -m 777 --parents /tmp/ramdisk\n",
        "      %shell sudo mount -t tmpfs -o size=9G ramdisk /tmp/ramdisk\n",
        "      pbar.update(2)\n",
        "\n",
        "      %shell wget -q -P /content \\\n",
        "        https://git.scicore.unibas.ch/schwede/openstructure/-/raw/7102c63615b64735c4941278d92b554ec94415f8/modules/mol/alg/src/stereo_chemical_props.txt\n",
        "      pbar.update(1)\n",
        "except subprocess.CalledProcessError:\n",
        "  print(captured)\n",
        "  raise"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "VzJ5iMjTtoZw"
      },
      "outputs": [],
      "source": [
        "#@title 2. Download AlphaFold\n",
        "\n",
        "#@markdown Please execute this cell by pressing the *Play* button on\n",
        "#@markdown the left.\n",
        "\n",
        "GIT_REPO = 'https://github.com/deepmind/alphafold'\n",
        "SOURCE_URL = 'https://storage.googleapis.com/alphafold/alphafold_params_colab_2022-12-06.tar'\n",
        "PARAMS_DIR = './alphafold/data/params'\n",
        "PARAMS_PATH = os.path.join(PARAMS_DIR, os.path.basename(SOURCE_URL))\n",
        "\n",
        "try:\n",
        "  with tqdm.notebook.tqdm(total=100, bar_format=TQDM_BAR_FORMAT) as pbar:\n",
        "    with io.capture_output() as captured:\n",
        "      %shell rm -rf alphafold\n",
        "      %shell git clone --branch main {GIT_REPO} alphafold\n",
        "      pbar.update(8)\n",
        "      # Install the required versions of all dependencies.\n",
        "      %shell pip3 install -r ./alphafold/requirements.txt\n",
        "      # Run setup.py to install only AlphaFold.\n",
        "      %shell pip3 install --no-dependencies ./alphafold\n",
        "      pbar.update(10)\n",
        "\n",
        "      # Apply OpenMM patch.\n",
        "      %shell pushd /opt/conda/lib/python3.8/site-packages/ \u0026\u0026 \\\n",
        "          patch -p0 \u003c /content/alphafold/docker/openmm.patch \u0026\u0026 \\\n",
        "          popd\n",
        "\n",
        "      # Make sure stereo_chemical_props.txt is in all locations where it could be searched for.\n",
        "      %shell mkdir -p /content/alphafold/alphafold/common\n",
        "      %shell cp -f /content/stereo_chemical_props.txt /content/alphafold/alphafold/common\n",
        "      %shell mkdir -p /opt/conda/lib/python3.8/site-packages/alphafold/common/\n",
        "      %shell cp -f /content/stereo_chemical_props.txt /opt/conda/lib/python3.8/site-packages/alphafold/common/\n",
        "\n",
        "      # Load parameters\n",
        "      %shell mkdir --parents \"{PARAMS_DIR}\"\n",
        "      %shell wget -O \"{PARAMS_PATH}\" \"{SOURCE_URL}\"\n",
        "      pbar.update(27)\n",
        "\n",
        "      %shell tar --extract --verbose --file=\"{PARAMS_PATH}\" \\\n",
        "        --directory=\"{PARAMS_DIR}\" --preserve-permissions\n",
        "      %shell rm \"{PARAMS_PATH}\"\n",
        "      pbar.update(55)\n",
        "except subprocess.CalledProcessError:\n",
        "  print(captured)\n",
        "  raise\n",
        "\n",
        "import jax\n",
        "if jax.local_devices()[0].platform == 'tpu':\n",
        "  raise RuntimeError('Colab TPU runtime not supported. Change it to GPU via Runtime -\u003e Change Runtime Type -\u003e Hardware accelerator -\u003e GPU.')\n",
        "elif jax.local_devices()[0].platform == 'cpu':\n",
        "  raise RuntimeError('Colab CPU runtime not supported. Change it to GPU via Runtime -\u003e Change Runtime Type -\u003e Hardware accelerator -\u003e GPU.')\n",
        "else:\n",
        "  print(f'Running with {jax.local_devices()[0].device_kind} GPU')\n",
        "\n",
        "# Make sure everything we need is on the path.\n",
        "import sys\n",
        "sys.path.append('/opt/conda/lib/python3.8/site-packages')\n",
        "sys.path.append('/content/alphafold')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "W4JpOs6oA-QS"
      },
      "source": [
        "## Making a prediction\n",
        "\n",
        "Please paste the sequence of your protein in the text box below, then run the remaining cells via _Runtime_ \u003e _Run after_. You can also run the cells individually by pressing the _Play_ button on the left.\n",
        "\n",
        "Note that the search against databases and the actual prediction can take some time, from minutes to hours, depending on the length of the protein and what type of GPU you are allocated by Colab (see FAQ below)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "rowN0bVYLe9n"
      },
      "outputs": [],
      "source": [
        "#@title 3. Enter the amino acid sequence(s) to fold ⬇️\n",
        "#@markdown Enter the amino acid sequence(s) to fold:\n",
        "#@markdown * If you enter only a single sequence, the monomer model will be \n",
        "#@markdown used (unless you override this below).\n",
        "#@markdown * If you enter multiple sequences, the multimer model will be used.\n",
        "\n",
        "from alphafold.notebooks import notebook_utils\n",
        "import enum\n",
        "\n",
        "@enum.unique\n",
        "class ModelType(enum.Enum):\n",
        "  MONOMER = 0\n",
        "  MULTIMER = 1\n",
        "\n",
        "sequence_1 = 'MAAHKGAEHHHKAAEHHEQAAKHHHAAAEHHEKGEHEQAAHHADTAYAHHKHAEEHAAQAAKHDAEHHAPKPH'  #@param {type:\"string\"}\n",
        "sequence_2 = ''  #@param {type:\"string\"}\n",
        "sequence_3 = ''  #@param {type:\"string\"}\n",
        "sequence_4 = ''  #@param {type:\"string\"}\n",
        "sequence_5 = ''  #@param {type:\"string\"}\n",
        "sequence_6 = ''  #@param {type:\"string\"}\n",
        "sequence_7 = ''  #@param {type:\"string\"}\n",
        "sequence_8 = ''  #@param {type:\"string\"}\n",
        "sequence_9 = ''  #@param {type:\"string\"}\n",
        "sequence_10 = ''  #@param {type:\"string\"}\n",
        "sequence_11 = ''  #@param {type:\"string\"}\n",
        "sequence_12 = ''  #@param {type:\"string\"}\n",
        "sequence_13 = ''  #@param {type:\"string\"}\n",
        "sequence_14 = ''  #@param {type:\"string\"}\n",
        "sequence_15 = ''  #@param {type:\"string\"}\n",
        "sequence_16 = ''  #@param {type:\"string\"}\n",
        "sequence_17 = ''  #@param {type:\"string\"}\n",
        "sequence_18 = ''  #@param {type:\"string\"}\n",
        "sequence_19 = ''  #@param {type:\"string\"}\n",
        "sequence_20 = ''  #@param {type:\"string\"}\n",
        "\n",
        "input_sequences = (\n",
        "    sequence_1, sequence_2, sequence_3, sequence_4, sequence_5, \n",
        "    sequence_6, sequence_7, sequence_8, sequence_9, sequence_10,\n",
        "    sequence_11, sequence_12, sequence_13, sequence_14, sequence_15, \n",
        "    sequence_16, sequence_17, sequence_18, sequence_19, sequence_20)\n",
        "\n",
        "MIN_PER_SEQUENCE_LENGTH = 16\n",
        "MAX_PER_SEQUENCE_LENGTH = 4000\n",
        "MAX_MONOMER_MODEL_LENGTH = 2500\n",
        "MAX_LENGTH = 4000\n",
        "MAX_VALIDATED_LENGTH = 3000\n",
        "\n",
        "#@markdown Select this checkbox to run the multimer model for a single sequence.\n",
        "#@markdown For proteins that are monomeric in their native form, or for very \n",
        "#@markdown large single chains you may get better accuracy and memory efficiency\n",
        "#@markdown by using the multimer model.\n",
        "#@markdown \n",
        "#@markdown \n",
        "#@markdown Due to improved memory efficiency the multimer model has a maximum\n",
        "#@markdown limit of 4000 residues, while the monomer model has a limit of 2500\n",
        "#@markdown residues.\n",
        "\n",
        "use_multimer_model_for_monomers = False #@param {type:\"boolean\"}\n",
        "\n",
        "# Validate the input sequences.\n",
        "sequences = notebook_utils.clean_and_validate_input_sequences(\n",
        "    input_sequences=input_sequences,\n",
        "    min_sequence_length=MIN_PER_SEQUENCE_LENGTH,\n",
        "    max_sequence_length=MAX_PER_SEQUENCE_LENGTH)\n",
        "\n",
        "if len(sequences) == 1:\n",
        "  if use_multimer_model_for_monomers:\n",
        "    print('Using the multimer model for a single chain, as requested.')\n",
        "    model_type_to_use = ModelType.MULTIMER\n",
        "  else:\n",
        "    print('Using the single-chain model.')\n",
        "    model_type_to_use = ModelType.MONOMER\n",
        "else:\n",
        "  print(f'Using the multimer model with {len(sequences)} sequences.')\n",
        "  model_type_to_use = ModelType.MULTIMER\n",
        "\n",
        "# Check whether total length exceeds limit.\n",
        "total_sequence_length = sum([len(seq) for seq in sequences])\n",
        "if total_sequence_length \u003e MAX_LENGTH:\n",
        "  raise ValueError('The total sequence length is too long: '\n",
        "                   f'{total_sequence_length}, while the maximum is '\n",
        "                   f'{MAX_LENGTH}.')\n",
        "\n",
        "# Check whether we exceed the monomer limit.\n",
        "if model_type_to_use == ModelType.MONOMER:\n",
        "  if len(sequences[0]) \u003e MAX_MONOMER_MODEL_LENGTH:\n",
        "    raise ValueError(\n",
        "        f'Input sequence is too long: {len(sequences[0])} amino acids, while '\n",
        "        f'the maximum for the monomer model is {MAX_MONOMER_MODEL_LENGTH}. You may '\n",
        "        'be able to run this sequence with the multimer model by selecting the '\n",
        "        'use_multimer_model_for_monomers checkbox above.')\n",
        "    \n",
        "if total_sequence_length \u003e MAX_VALIDATED_LENGTH:\n",
        "  print('WARNING: The accuracy of the system has not been fully validated '\n",
        "        'above 3000 residues, and you may experience long running times or '\n",
        "        f'run out of memory. Total sequence length is {total_sequence_length} '\n",
        "        'residues.')\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "2tTeTTsLKPjB"
      },
      "outputs": [],
      "source": [
        "#@title 4. Search against genetic databases\n",
        "\n",
        "#@markdown Once this cell has been executed, you will see\n",
        "#@markdown statistics about the multiple sequence alignment\n",
        "#@markdown (MSA) that will be used by AlphaFold. In particular,\n",
        "#@markdown you’ll see how well each residue is covered by similar\n",
        "#@markdown sequences in the MSA.\n",
        "\n",
        "# --- Python imports ---\n",
        "import collections\n",
        "import copy\n",
        "from concurrent import futures\n",
        "import json\n",
        "import random\n",
        "\n",
        "from urllib import request\n",
        "from google.colab import files\n",
        "from matplotlib import gridspec\n",
        "import matplotlib.pyplot as plt\n",
        "import numpy as np\n",
        "import py3Dmol\n",
        "\n",
        "from alphafold.model import model\n",
        "from alphafold.model import config\n",
        "from alphafold.model import data\n",
        "\n",
        "from alphafold.data import feature_processing\n",
        "from alphafold.data import msa_pairing\n",
        "from alphafold.data import pipeline\n",
        "from alphafold.data import pipeline_multimer\n",
        "from alphafold.data.tools import jackhmmer\n",
        "\n",
        "from alphafold.common import protein\n",
        "\n",
        "from alphafold.relax import relax\n",
        "from alphafold.relax import utils\n",
        "\n",
        "from IPython import display\n",
        "from ipywidgets import GridspecLayout\n",
        "from ipywidgets import Output\n",
        "\n",
        "# Color bands for visualizing plddt\n",
        "PLDDT_BANDS = [(0, 50, '#FF7D45'),\n",
        "               (50, 70, '#FFDB13'),\n",
        "               (70, 90, '#65CBF3'),\n",
        "               (90, 100, '#0053D6')]\n",
        "\n",
        "# --- Find the closest source ---\n",
        "test_url_pattern = 'https://storage.googleapis.com/alphafold-colab{:s}/latest/uniref90_2022_01.fasta.1'\n",
        "ex = futures.ThreadPoolExecutor(3)\n",
        "def fetch(source):\n",
        "  request.urlretrieve(test_url_pattern.format(source))\n",
        "  return source\n",
        "fs = [ex.submit(fetch, source) for source in ['', '-europe', '-asia']]\n",
        "source = None\n",
        "for f in futures.as_completed(fs):\n",
        "  source = f.result()\n",
        "  ex.shutdown()\n",
        "  break\n",
        "\n",
        "JACKHMMER_BINARY_PATH = '/usr/bin/jackhmmer'\n",
        "DB_ROOT_PATH = f'https://storage.googleapis.com/alphafold-colab{source}/latest/'\n",
        "# The z_value is the number of sequences in a database.\n",
        "MSA_DATABASES = [\n",
        "    {'db_name': 'uniref90',\n",
        "     'db_path': f'{DB_ROOT_PATH}uniref90_2022_01.fasta',\n",
        "     'num_streamed_chunks': 62,\n",
        "     'z_value': 144_113_457},\n",
        "    {'db_name': 'smallbfd',\n",
        "     'db_path': f'{DB_ROOT_PATH}bfd-first_non_consensus_sequences.fasta',\n",
        "     'num_streamed_chunks': 17,\n",
        "     'z_value': 65_984_053},\n",
        "    {'db_name': 'mgnify',\n",
        "     'db_path': f'{DB_ROOT_PATH}mgy_clusters_2022_05.fasta',\n",
        "     'num_streamed_chunks': 120,\n",
        "     'z_value': 623_796_864},\n",
        "]\n",
        "\n",
        "# Search UniProt and construct the all_seq features only for heteromers, not homomers.\n",
        "if model_type_to_use == ModelType.MULTIMER and len(set(sequences)) \u003e 1:\n",
        "  MSA_DATABASES.extend([\n",
        "      # Swiss-Prot and TrEMBL are concatenated together as UniProt.\n",
        "      {'db_name': 'uniprot',\n",
        "       'db_path': f'{DB_ROOT_PATH}uniprot_2021_04.fasta',\n",
        "       'num_streamed_chunks': 101,\n",
        "       'z_value': 225_013_025 + 565_928},\n",
        "  ])\n",
        "\n",
        "TOTAL_JACKHMMER_CHUNKS = sum([cfg['num_streamed_chunks'] for cfg in MSA_DATABASES])\n",
        "\n",
        "MAX_HITS = {\n",
        "    'uniref90': 10_000,\n",
        "    'smallbfd': 5_000,\n",
        "    'mgnify': 501,\n",
        "    'uniprot': 50_000,\n",
        "}\n",
        "\n",
        "\n",
        "def get_msa(sequences):\n",
        "  \"\"\"Searches for MSA for given sequences using chunked Jackhmmer search.\n",
        "  \n",
        "  Args:\n",
        "    sequences: A list of sequences to search against all databases.\n",
        "\n",
        "  Returns:\n",
        "    A dictionary mapping unique sequences to dictionaries mapping each database\n",
        "    to a list of results, one for each chunk of the database.\n",
        "  \"\"\"\n",
        "  sequence_to_fasta_path = {}\n",
        "  # Deduplicate to not do redundant work for multiple copies of the same chain in homomers.\n",
        "  for sequence_index, sequence in enumerate(sorted(set(sequences)), 1):\n",
        "    fasta_path = f'target_{sequence_index:02d}.fasta'\n",
        "    with open(fasta_path, 'wt') as f:\n",
        "      f.write(f'\u003equery\\n{sequence}')\n",
        "    sequence_to_fasta_path[sequence] = fasta_path\n",
        "\n",
        "  # Run the search against chunks of genetic databases (since the genetic\n",
        "  # databases don't fit in Colab disk).\n",
        "  raw_msa_results = {sequence: {} for sequence in sequence_to_fasta_path.keys()}\n",
        "  print('\\nGetting MSA for all sequences')\n",
        "  with tqdm.notebook.tqdm(total=TOTAL_JACKHMMER_CHUNKS, bar_format=TQDM_BAR_FORMAT) as pbar:\n",
        "    def jackhmmer_chunk_callback(i):\n",
        "      pbar.update(n=1)\n",
        "\n",
        "    for db_config in MSA_DATABASES:\n",
        "      db_name = db_config['db_name']\n",
        "      pbar.set_description(f'Searching {db_name}')\n",
        "      jackhmmer_runner = jackhmmer.Jackhmmer(\n",
        "          binary_path=JACKHMMER_BINARY_PATH,\n",
        "          database_path=db_config['db_path'],\n",
        "          get_tblout=True,\n",
        "          num_streamed_chunks=db_config['num_streamed_chunks'],\n",
        "          streaming_callback=jackhmmer_chunk_callback,\n",
        "          z_value=db_config['z_value'])\n",
        "      # Query all unique sequences against each chunk of the database to prevent\n",
        "      # redundantly fetching each chunk for each unique sequence.\n",
        "      results = jackhmmer_runner.query_multiple(list(sequence_to_fasta_path.values()))\n",
        "      for sequence, result_for_sequence in zip(sequence_to_fasta_path.keys(), results):\n",
        "        raw_msa_results[sequence][db_name] = result_for_sequence\n",
        "\n",
        "  return raw_msa_results\n",
        "\n",
        "\n",
        "features_for_chain = {}\n",
        "raw_msa_results_for_sequence = get_msa(sequences)\n",
        "for sequence_index, sequence in enumerate(sequences, start=1):\n",
        "  raw_msa_results = copy.deepcopy(raw_msa_results_for_sequence[sequence])\n",
        "\n",
        "  # Extract the MSAs from the Stockholm files.\n",
        "  # NB: deduplication happens later in pipeline.make_msa_features.\n",
        "  single_chain_msas = []\n",
        "  uniprot_msa = None\n",
        "  for db_name, db_results in raw_msa_results.items():\n",
        "    merged_msa = notebook_utils.merge_chunked_msa(\n",
        "        results=db_results, max_hits=MAX_HITS.get(db_name))\n",
        "    if merged_msa.sequences and db_name != 'uniprot':\n",
        "      single_chain_msas.append(merged_msa)\n",
        "      msa_size = len(set(merged_msa.sequences))\n",
        "      print(f'{msa_size} unique sequences found in {db_name} for sequence {sequence_index}')\n",
        "    elif merged_msa.sequences and db_name == 'uniprot':\n",
        "      uniprot_msa = merged_msa\n",
        "\n",
        "  notebook_utils.show_msa_info(single_chain_msas=single_chain_msas, sequence_index=sequence_index)\n",
        "\n",
        "  # Turn the raw data into model features.\n",
        "  feature_dict = {}\n",
        "  feature_dict.update(pipeline.make_sequence_features(\n",
        "      sequence=sequence, description='query', num_res=len(sequence)))\n",
        "  feature_dict.update(pipeline.make_msa_features(msas=single_chain_msas))\n",
        "  # We don't use templates in AlphaFold Colab notebook, add only empty placeholder features.\n",
        "  feature_dict.update(notebook_utils.empty_placeholder_template_features(\n",
        "      num_templates=0, num_res=len(sequence)))\n",
        "\n",
        "  # Construct the all_seq features only for heteromers, not homomers.\n",
        "  if model_type_to_use == ModelType.MULTIMER and len(set(sequences)) \u003e 1:\n",
        "    valid_feats = msa_pairing.MSA_FEATURES + (\n",
        "        'msa_species_identifiers',\n",
        "    )\n",
        "    all_seq_features = {\n",
        "        f'{k}_all_seq': v for k, v in pipeline.make_msa_features([uniprot_msa]).items()\n",
        "        if k in valid_feats}\n",
        "    feature_dict.update(all_seq_features)\n",
        "\n",
        "  features_for_chain[protein.PDB_CHAIN_IDS[sequence_index - 1]] = feature_dict\n",
        "\n",
        "\n",
        "# Do further feature post-processing depending on the model type.\n",
        "if model_type_to_use == ModelType.MONOMER:\n",
        "  np_example = features_for_chain[protein.PDB_CHAIN_IDS[0]]\n",
        "\n",
        "elif model_type_to_use == ModelType.MULTIMER:\n",
        "  all_chain_features = {}\n",
        "  for chain_id, chain_features in features_for_chain.items():\n",
        "    all_chain_features[chain_id] = pipeline_multimer.convert_monomer_features(\n",
        "        chain_features, chain_id)\n",
        "\n",
        "  all_chain_features = pipeline_multimer.add_assembly_features(all_chain_features)\n",
        "\n",
        "  np_example = feature_processing.pair_and_merge(\n",
        "      all_chain_features=all_chain_features)\n",
        "\n",
        "  # Pad MSA to avoid zero-sized extra_msa.\n",
        "  np_example = pipeline_multimer.pad_msa(np_example, min_num_seq=512)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "XUo6foMQxwS2"
      },
      "outputs": [],
      "source": [
        "#@title 5. Run AlphaFold and download prediction\n",
        "\n",
        "#@markdown Once this cell has been executed, a zip-archive with\n",
        "#@markdown the obtained prediction will be automatically downloaded\n",
        "#@markdown to your computer.\n",
        "\n",
        "#@markdown In case you are having issues with the relaxation stage, you can disable it below.\n",
        "#@markdown Warning: This means that the prediction might have distracting\n",
        "#@markdown small stereochemical violations.\n",
        "\n",
        "run_relax = True  #@param {type:\"boolean\"}\n",
        "\n",
        "#@markdown Relaxation is faster with a GPU, but we have found it to be less stable.\n",
        "#@markdown You may wish to enable GPU for higher performance, but if it doesn't\n",
        "#@markdown converge we suggest reverting to running without the GPU.\n",
        "\n",
        "relax_use_gpu = False  #@param {type:\"boolean\"}\n",
        "\n",
        "\n",
        "#@markdown The multimer model will continue recycling until the predictions stop\n",
        "#@markdown changing, up to the limit set here. For higher accuracy, at the \n",
        "#@markdown potential cost of longer inference times, set this to 20.\n",
        "\n",
        "multimer_model_max_num_recycles = 3  #@param {type:\"integer\"}\n",
        "\n",
        "\n",
        "# --- Run the model ---\n",
        "if model_type_to_use == ModelType.MONOMER:\n",
        "  model_names = config.MODEL_PRESETS['monomer'] + ('model_2_ptm',)\n",
        "elif model_type_to_use == ModelType.MULTIMER:\n",
        "  model_names = config.MODEL_PRESETS['multimer']\n",
        "\n",
        "output_dir = 'prediction'\n",
        "os.makedirs(output_dir, exist_ok=True)\n",
        "\n",
        "plddts = {}\n",
        "ranking_confidences = {}\n",
        "pae_outputs = {}\n",
        "unrelaxed_proteins = {}\n",
        "\n",
        "with tqdm.notebook.tqdm(total=len(model_names) + 1, bar_format=TQDM_BAR_FORMAT) as pbar:\n",
        "  for model_name in model_names:\n",
        "    pbar.set_description(f'Running {model_name}')\n",
        "\n",
        "    cfg = config.model_config(model_name)\n",
        "\n",
        "    if model_type_to_use == ModelType.MONOMER:\n",
        "      cfg.data.eval.num_ensemble = 1\n",
        "    elif model_type_to_use == ModelType.MULTIMER:\n",
        "      cfg.model.num_ensemble_eval = 1\n",
        "\n",
        "    if model_type_to_use == ModelType.MULTIMER:\n",
        "      cfg.model.num_recycle = multimer_model_max_num_recycles\n",
        "      cfg.model.recycle_early_stop_tolerance = 0.5\n",
        "\n",
        "    params = data.get_model_haiku_params(model_name, './alphafold/data')\n",
        "    model_runner = model.RunModel(cfg, params)\n",
        "    processed_feature_dict = model_runner.process_features(np_example, random_seed=0)\n",
        "    prediction = model_runner.predict(processed_feature_dict, random_seed=random.randrange(sys.maxsize))\n",
        "\n",
        "    mean_plddt = prediction['plddt'].mean()\n",
        "\n",
        "    if model_type_to_use == ModelType.MONOMER:\n",
        "      if 'predicted_aligned_error' in prediction:\n",
        "        pae_outputs[model_name] = (prediction['predicted_aligned_error'],\n",
        "                                   prediction['max_predicted_aligned_error'])\n",
        "      else:\n",
        "        # Monomer models are sorted by mean pLDDT. Do not put monomer pTM models here as they\n",
        "        # should never get selected.\n",
        "        ranking_confidences[model_name] = prediction['ranking_confidence']\n",
        "        plddts[model_name] = prediction['plddt']\n",
        "    elif model_type_to_use == ModelType.MULTIMER:\n",
        "      # Multimer models are sorted by pTM+ipTM.\n",
        "      ranking_confidences[model_name] = prediction['ranking_confidence']\n",
        "      plddts[model_name] = prediction['plddt']\n",
        "      pae_outputs[model_name] = (prediction['predicted_aligned_error'],\n",
        "                                 prediction['max_predicted_aligned_error'])\n",
        "\n",
        "    # Set the b-factors to the per-residue plddt.\n",
        "    final_atom_mask = prediction['structure_module']['final_atom_mask']\n",
        "    b_factors = prediction['plddt'][:, None] * final_atom_mask\n",
        "    unrelaxed_protein = protein.from_prediction(\n",
        "        processed_feature_dict,\n",
        "        prediction,\n",
        "        b_factors=b_factors,\n",
        "        remove_leading_feature_dimension=(\n",
        "            model_type_to_use == ModelType.MONOMER))\n",
        "    unrelaxed_proteins[model_name] = unrelaxed_protein\n",
        "\n",
        "    # Delete unused outputs to save memory.\n",
        "    del model_runner\n",
        "    del params\n",
        "    del prediction\n",
        "    pbar.update(n=1)\n",
        "\n",
        "  # --- AMBER relax the best model ---\n",
        "\n",
        "  # Find the best model according to the mean pLDDT.\n",
        "  best_model_name = max(ranking_confidences.keys(), key=lambda x: ranking_confidences[x])\n",
        "\n",
        "  if run_relax:\n",
        "    pbar.set_description('AMBER relaxation')\n",
        "    amber_relaxer = relax.AmberRelaxation(\n",
        "        max_iterations=0,\n",
        "        tolerance=2.39,\n",
        "        stiffness=10.0,\n",
        "        exclude_residues=[],\n",
        "        max_outer_iterations=3,\n",
        "        use_gpu=relax_use_gpu)\n",
        "    relaxed_pdb, _, _ = amber_relaxer.process(prot=unrelaxed_proteins[best_model_name])\n",
        "  else:\n",
        "    print('Warning: Running without the relaxation stage.')\n",
        "    relaxed_pdb = protein.to_pdb(unrelaxed_proteins[best_model_name])\n",
        "  pbar.update(n=1)  # Finished AMBER relax.\n",
        "\n",
        "# Construct multiclass b-factors to indicate confidence bands\n",
        "# 0=very low, 1=low, 2=confident, 3=very high\n",
        "banded_b_factors = []\n",
        "for plddt in plddts[best_model_name]:\n",
        "  for idx, (min_val, max_val, _) in enumerate(PLDDT_BANDS):\n",
        "    if plddt \u003e= min_val and plddt \u003c= max_val:\n",
        "      banded_b_factors.append(idx)\n",
        "      break\n",
        "banded_b_factors = np.array(banded_b_factors)[:, None] * final_atom_mask\n",
        "to_visualize_pdb = utils.overwrite_b_factors(relaxed_pdb, banded_b_factors)\n",
        "\n",
        "\n",
        "# Write out the prediction\n",
        "pred_output_path = os.path.join(output_dir, 'selected_prediction.pdb')\n",
        "with open(pred_output_path, 'w') as f:\n",
        "  f.write(relaxed_pdb)\n",
        "\n",
        "\n",
        "# --- Visualise the prediction \u0026 confidence ---\n",
        "show_sidechains = True\n",
        "def plot_plddt_legend():\n",
        "  \"\"\"Plots the legend for pLDDT.\"\"\"\n",
        "  thresh = ['Very low (pLDDT \u003c 50)',\n",
        "            'Low (70 \u003e pLDDT \u003e 50)',\n",
        "            'Confident (90 \u003e pLDDT \u003e 70)',\n",
        "            'Very high (pLDDT \u003e 90)']\n",
        "\n",
        "  colors = [x[2] for x in PLDDT_BANDS]\n",
        "\n",
        "  plt.figure(figsize=(2, 2))\n",
        "  for c in colors:\n",
        "    plt.bar(0, 0, color=c)\n",
        "  plt.legend(thresh, frameon=False, loc='center', fontsize=20)\n",
        "  plt.xticks([])\n",
        "  plt.yticks([])\n",
        "  ax = plt.gca()\n",
        "  ax.spines['right'].set_visible(False)\n",
        "  ax.spines['top'].set_visible(False)\n",
        "  ax.spines['left'].set_visible(False)\n",
        "  ax.spines['bottom'].set_visible(False)\n",
        "  plt.title('Model Confidence', fontsize=20, pad=20)\n",
        "  return plt\n",
        "\n",
        "# Show the structure coloured by chain if the multimer model has been used.\n",
        "if model_type_to_use == ModelType.MULTIMER:\n",
        "  multichain_view = py3Dmol.view(width=800, height=600)\n",
        "  multichain_view.addModelsAsFrames(to_visualize_pdb)\n",
        "  multichain_style = {'cartoon': {'colorscheme': 'chain'}}\n",
        "  multichain_view.setStyle({'model': -1}, multichain_style)\n",
        "  multichain_view.zoomTo()\n",
        "  multichain_view.show()\n",
        "\n",
        "# Color the structure by per-residue pLDDT\n",
        "color_map = {i: bands[2] for i, bands in enumerate(PLDDT_BANDS)}\n",
        "view = py3Dmol.view(width=800, height=600)\n",
        "view.addModelsAsFrames(to_visualize_pdb)\n",
        "style = {'cartoon': {'colorscheme': {'prop': 'b', 'map': color_map}}}\n",
        "if show_sidechains:\n",
        "  style['stick'] = {}\n",
        "view.setStyle({'model': -1}, style)\n",
        "view.zoomTo()\n",
        "\n",
        "grid = GridspecLayout(1, 2)\n",
        "out = Output()\n",
        "with out:\n",
        "  view.show()\n",
        "grid[0, 0] = out\n",
        "\n",
        "out = Output()\n",
        "with out:\n",
        "  plot_plddt_legend().show()\n",
        "grid[0, 1] = out\n",
        "\n",
        "display.display(grid)\n",
        "\n",
        "# Display pLDDT and predicted aligned error (if output by the model).\n",
        "if pae_outputs:\n",
        "  num_plots = 2\n",
        "else:\n",
        "  num_plots = 1\n",
        "\n",
        "plt.figure(figsize=[8 * num_plots, 6])\n",
        "plt.subplot(1, num_plots, 1)\n",
        "plt.plot(plddts[best_model_name])\n",
        "plt.title('Predicted LDDT')\n",
        "plt.xlabel('Residue')\n",
        "plt.ylabel('pLDDT')\n",
        "\n",
        "if num_plots == 2:\n",
        "  plt.subplot(1, 2, 2)\n",
        "  pae, max_pae = list(pae_outputs.values())[0]\n",
        "  plt.imshow(pae, vmin=0., vmax=max_pae, cmap='Greens_r')\n",
        "  plt.colorbar(fraction=0.046, pad=0.04)\n",
        "\n",
        "  # Display lines at chain boundaries.\n",
        "  best_unrelaxed_prot = unrelaxed_proteins[best_model_name]\n",
        "  total_num_res = best_unrelaxed_prot.residue_index.shape[-1]\n",
        "  chain_ids = best_unrelaxed_prot.chain_index\n",
        "  for chain_boundary in np.nonzero(chain_ids[:-1] - chain_ids[1:]):\n",
        "    if chain_boundary.size:\n",
        "      plt.plot([0, total_num_res], [chain_boundary, chain_boundary], color='red')\n",
        "      plt.plot([chain_boundary, chain_boundary], [0, total_num_res], color='red')\n",
        "\n",
        "  plt.title('Predicted Aligned Error')\n",
        "  plt.xlabel('Scored residue')\n",
        "  plt.ylabel('Aligned residue')\n",
        "\n",
        "# Save the predicted aligned error (if it exists).\n",
        "pae_output_path = os.path.join(output_dir, 'predicted_aligned_error.json')\n",
        "if pae_outputs:\n",
        "  # Save predicted aligned error in the same format as the AF EMBL DB.\n",
        "  pae_data = notebook_utils.get_pae_json(pae=pae, max_pae=max_pae.item())\n",
        "  with open(pae_output_path, 'w') as f:\n",
        "    f.write(pae_data)\n",
        "\n",
        "# --- Download the predictions ---\n",
        "!zip -q -r {output_dir}.zip {output_dir}\n",
        "files.download(f'{output_dir}.zip')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lUQAn5LYC5n4"
      },
      "source": [
        "### Interpreting the prediction\n",
        "\n",
        "In general, predicted LDDT (pLDDT) is best used for intra-domain confidence, whereas Predicted Aligned Error (PAE) is best used for assessing between-domain or between-chain confidence.\n",
        "\n",
        "Please see the [AlphaFold methods paper](https://www.nature.com/articles/s41586-021-03819-2), the [AlphaFold predictions of the human proteome paper](https://www.nature.com/articles/s41586-021-03828-1), and the [AlphaFold-Multimer paper](https://www.biorxiv.org/content/10.1101/2021.10.04.463034v1) as well as [our FAQ](https://alphafold.ebi.ac.uk/faq) on how to interpret AlphaFold predictions."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jeb2z8DIA4om"
      },
      "source": [
        "## FAQ \u0026 Troubleshooting\n",
        "\n",
        "\n",
        "*   How do I get a predicted protein structure for my protein?\n",
        "    *   Click on the _Connect_ button on the top right to get started.\n",
        "    *   Paste the amino acid sequence of your protein (without any headers) into the “Enter the amino acid sequence to fold” field.\n",
        "    *   Run all cells in the Colab, either by running them individually (with the play button on the left side) or via _Runtime_ \u003e _Run all._ Make sure you run all 5 cells in order.\n",
        "    *   The predicted protein structure will be downloaded once all cells have been executed. Note: This can take minutes to hours - see below.\n",
        "*   How long will this take?\n",
        "    *   Downloading the AlphaFold source code can take up to a few minutes.\n",
        "    *   Downloading and installing the third-party software can take up to a few minutes.\n",
        "    *   The search against genetic databases can take minutes to hours.\n",
        "    *   Running AlphaFold and generating the prediction can take minutes to hours, depending on the length of your protein and on which GPU-type Colab has assigned you.\n",
        "*   My Colab no longer seems to be doing anything, what should I do?\n",
        "    *   Some steps may take minutes to hours to complete.\n",
        "    *   If nothing happens or if you receive an error message, try restarting your Colab runtime via _Runtime_ \u003e _Restart runtime_.\n",
        "    *   If this doesn’t help, try resetting your Colab runtime via _Runtime_ \u003e _Factory reset runtime_.\n",
        "*   How does this compare to the open-source version of AlphaFold?\n",
        "    *   This Colab version of AlphaFold searches a selected portion of the BFD dataset and currently doesn’t use templates, so its accuracy is reduced in comparison to the full version of AlphaFold that is described in the [AlphaFold paper](https://doi.org/10.1038/s41586-021-03819-2) and [Github repo](https://github.com/deepmind/alphafold/) (the full version is available via the inference script).\n",
        "*   What is a Colab?\n",
        "    *   See the [Colab FAQ](https://research.google.com/colaboratory/faq.html).\n",
        "*   I received a warning “Notebook requires high RAM”, what do I do?\n",
        "    *   The resources allocated to your Colab vary. See the [Colab FAQ](https://research.google.com/colaboratory/faq.html) for more details.\n",
        "    *   You can execute the Colab nonetheless.\n",
        "*   I received an error “Colab CPU runtime not supported” or “No GPU/TPU found”, what do I do?\n",
        "    *   Colab CPU runtime is not supported. Try changing your runtime via _Runtime_ \u003e _Change runtime type_ \u003e _Hardware accelerator_ \u003e _GPU_.\n",
        "    *   The type of GPU allocated to your Colab varies. See the [Colab FAQ](https://research.google.com/colaboratory/faq.html) for more details.\n",
        "    *   If you receive “Cannot connect to GPU backend”, you can try again later to see if Colab allocates you a GPU.\n",
        "    *   [Colab Pro](https://colab.research.google.com/signup) offers priority access to GPUs.\n",
        "*   I received an error “ModuleNotFoundError: No module named ...”, even though I ran the cell that imports it, what do I do?\n",
        "    *   Colab notebooks on the free tier time out after a certain amount of time. See the [Colab FAQ](https://research.google.com/colaboratory/faq.html#idle-timeouts). Try rerunning the whole notebook from the beginning.\n",
        "*   Does this tool install anything on my computer?\n",
        "    *   No, everything happens in the cloud on Google Colab.\n",
        "    *   At the end of the Colab execution a zip-archive with the obtained prediction will be automatically downloaded to your computer.\n",
        "*   How should I share feedback and bug reports?\n",
        "    *   Please share any feedback and bug reports as an [issue](https://github.com/deepmind/alphafold/issues) on Github.\n",
        "\n",
        "\n",
        "## Related work\n",
        "\n",
        "Take a look at these Colab notebooks provided by the community (please note that these notebooks may vary from our validated AlphaFold system and we cannot guarantee their accuracy):\n",
        "\n",
        "*   The [ColabFold AlphaFold2 notebook](https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/AlphaFold2.ipynb) by Sergey Ovchinnikov, Milot Mirdita and Martin Steinegger, which uses an API hosted at the Södinglab based on the MMseqs2 server ([Mirdita et al. 2019, Bioinformatics](https://academic.oup.com/bioinformatics/article/35/16/2856/5280135)) for the multiple sequence alignment creation.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YfPhvYgKC81B"
      },
      "source": [
        "# License and Disclaimer\n",
        "\n",
        "This is not an officially-supported Google product.\n",
        "\n",
        "This Colab notebook and other information provided are for theoretical modelling only, and caution should be exercised in their use. They are provided ‘as-is’ without any warranty of any kind, whether expressed or implied. Information is not intended to be a substitute for professional medical advice, diagnosis, or treatment, and does not constitute medical or other professional advice.\n",
        "\n",
        "Copyright 2021 DeepMind Technologies Limited.\n",
        "\n",
        "\n",
        "## AlphaFold Code License\n",
        "\n",
        "Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0.\n",
        "\n",
        "Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.\n",
        "\n",
        "## Model Parameters License\n",
        "\n",
        "The AlphaFold parameters are made available under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) license. You can find details at: https://creativecommons.org/licenses/by/4.0/legalcode\n",
        "\n",
        "\n",
        "## Third-party software\n",
        "\n",
        "Use of the third-party software, libraries or code referred to in the [Acknowledgements section](https://github.com/deepmind/alphafold/#acknowledgements) in the AlphaFold README may be governed by separate terms and conditions or license provisions. Your use of the third-party software, libraries or code is subject to any such terms and you should check that you can comply with any applicable restrictions or terms and conditions before use.\n",
        "\n",
        "\n",
        "## Mirrored Databases\n",
        "\n",
        "The following databases have been mirrored by DeepMind, and are available with reference to the following:\n",
        "* UniProt: v2021\\_04 (unmodified), by The UniProt Consortium, available under a [Creative Commons Attribution-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nd/4.0/).\n",
        "* UniRef90: v2022\\_01 (unmodified), by The UniProt Consortium, available under a [Creative Commons Attribution-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nd/4.0/).\n",
        "* MGnify: v2022\\_05 (unmodified), by Mitchell AL et al., available free of all copyright restrictions and made fully and freely available for both non-commercial and commercial use under [CC0 1.0 Universal (CC0 1.0) Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/).\n",
        "* BFD: (modified), by Steinegger M. and Söding J., modified by DeepMind, available under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). See the Methods section of the [AlphaFold proteome paper](https://www.nature.com/articles/s41586-021-03828-1) for details."
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "name": "AlphaFold.ipynb",
      "private_outputs": true,
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}