- 25 Oct, 2023 1 commit
  - Sachin Kadyan authored
- 24 Oct, 2023 4 commits
  - Sachin Kadyan authored
  - Sachin Kadyan authored: Bugfix: Corrected paths for just-in-time embedding generation
  - Sachin Kadyan authored
  - Sachin Kadyan authored
- 23 Oct, 2023 2 commits
  - Sachin Kadyan authored
  - Sachin Kadyan authored
- 21 Oct, 2023 1 commit
  - Sachin Kadyan authored
- 19 Oct, 2023 1 commit
  - Gustaf Ahdritz authored
- 11 Oct, 2023 1 commit
  - Sachin Kadyan authored
- 10 Oct, 2023 30 commits
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Gustaf Ahdritz authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - sachinkadyan7 authored
  - sachinkadyan7 authored
  - sachinkadyan7 authored
  - sachinkadyan7 authored
  - Sachin Kadyan authored: Fixed a bug in `data_transforms` that prevented creating the MSA mask when the MSA contains only the input sequence. Set `max_msa_clusters=1` in the model presets to allow the input sequence to be an MSA cluster centre.
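A minimal sketch of the constraint this fix addresses, in plain Python. The function name and mask representation are hypothetical illustrations, not the repository's actual API: the point is that with a single-sequence MSA, at least one cluster centre (the input sequence itself) must be permitted, hence `max_msa_clusters=1`.

```python
def make_msa_mask(msa_rows, max_msa_clusters):
    """Pick cluster centres from MSA rows and build a boolean row mask.

    Hypothetical helper for illustration only. With a one-row MSA
    (just the input sequence), max_msa_clusters must be at least 1 so
    the input sequence can serve as the lone cluster centre.
    """
    n_seq = len(msa_rows)
    n_clusters = min(max_msa_clusters, n_seq)
    if n_clusters < 1:
        raise ValueError("need at least one MSA cluster centre")
    # Mark the first n_clusters rows as cluster centres.
    return [i < n_clusters for i in range(n_seq)]

# A seqemb-style preset would set max_msa_clusters=1, so a
# single-sequence MSA still yields a valid (one-centre) mask.
single_seq_msa = [[0] * 8]  # one row: the input sequence
mask = make_msa_mask(single_seq_msa, max_msa_clusters=1)
```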
  - Sachin Kadyan authored: In `seqemb_mode`, `process_pdb` loads the sequence embedding for the PDB's protein and builds a dummy MSA.
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored: In `seq_emb` mode, the AlignmentRunner only generates templates.
  - Sachin Kadyan authored: Added the `seq_emb` features to the list of features processed by the feature pipeline when using `seq_emb` mode; in `seq_emb` mode, the list of `seq_emb` features is added to `feature_names`.
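A hedged sketch of what extending the pipeline's feature list might look like. The concrete feature names below are assumptions for illustration, not the repository's real list:

```python
# Illustrative feature-name lists (assumed, not the project's actual ones).
BASE_FEATURES = ["aatype", "residue_index", "msa", "deletion_matrix"]
SEQEMB_FEATURES = ["seq_embedding"]

def feature_names(seqemb_mode: bool):
    """Return the features the pipeline should process.

    When seqemb_mode is on, the sequence-embedding features are
    appended so the feature pipeline picks them up.
    """
    names = list(BASE_FEATURES)
    if seqemb_mode:
        names.extend(SEQEMB_FEATURES)
    return names
```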
  - Sachin Kadyan authored: `_process_seqemb_features` now returns a dictionary instead of a tensor.
  - Sachin Kadyan authored: Bugfix: `torch` throws warnings when copying a tensor via the tensor constructor; added a lambda that calls `.clone()` on those tensors instead.
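The warning in question comes from constructing a tensor directly from an existing tensor (e.g. `torch.tensor(t)`), where PyTorch recommends `sourceTensor.clone().detach()` instead. A minimal sketch of the lambda-based fix described above (the surrounding code shape is an assumption):

```python
import torch

# Copying with torch.tensor(existing_tensor) emits a UserWarning;
# a small lambda that clones the source tensor avoids it while still
# producing an independent copy.
copy_fn = lambda t: t.clone()

src = torch.arange(4, dtype=torch.float32)
dst = copy_fn(src)  # same values, separate storage, no warning
```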
  - Sachin Kadyan authored
  - Sachin Kadyan authored: Turn on `seqemb` mode in the `data`, `model`, and `globals` config when using the `seqemb` training preset; also set configuration options specific to finetuning in general.
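A minimal sketch of a preset toggling the mode across config sections, assuming a nested-dict config; the real project likely uses a structured config object, so the shape here is illustrative:

```python
def apply_seqemb_preset(config):
    """Enable seqemb mode in the data, model, and globals sections.

    Hypothetical helper: flips the same flag in every section that
    needs to know the model is running on sequence embeddings.
    """
    for section in ("data", "model", "globals"):
        config.setdefault(section, {})["seqemb_mode"] = {"enabled": True}
    return config

cfg = apply_seqemb_preset({"data": {}, "model": {}, "globals": {}})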
  - Sachin Kadyan authored: Turn on `seqemb` mode in the `data`, `model`, and `globals` config when using the `seqemb` training preset.
  - Sachin Kadyan authored: Added passing of the sequence-embedding-mode flag from `data_modules` to `data_pipeline` for the training and inference pipelines; the `config.data.seqemb_mode.enabled` flag is passed to the FASTA, PDB, and MMCIF data pipelines.
  - Sachin Kadyan authored: Use sequence embedding files when in `seqemb` mode; make dummy MSA features for MMCIF when using `seqemb` mode.
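In seqemb mode there is no real alignment, so the MSA features can be stubbed out with a single row holding only the query sequence. A hedged sketch; the feature names and shapes are assumptions for illustration:

```python
def make_dummy_msa_feats(query_sequence_ids):
    """Build single-row placeholder MSA features for seqemb mode.

    Hypothetical helper: the 'alignment' is just the query itself,
    with an all-zero deletion matrix.
    """
    n_res = len(query_sequence_ids)
    return {
        "msa": [list(query_sequence_ids)],   # shape: 1 x n_res
        "deletion_matrix": [[0] * n_res],    # shape: 1 x n_res
        "num_alignments": 1,
    }
```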
  - Sachin Kadyan authored: Added a flag `no_column_attention` to the evoformer config, and a check in `evoformer.py` that switches off `MSAColumnAttention` when `no_column_attention` is `True`.
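A sketch of how such a flag might gate block construction. The layer names below are stand-ins taken from the commit message; the builder function and config shape are assumptions, not the module's actual code:

```python
def msa_stack_layers(config):
    """Return the MSA-stack layer names, honoring no_column_attention.

    Illustrative only: when the flag is set, MSAColumnAttention is
    simply omitted from the stack (useful when the 'MSA' is a single
    embedded sequence and column attention has nothing to attend over).
    """
    layers = ["MSARowAttentionWithPairBias"]
    if not config.get("no_column_attention", False):
        layers.append("MSAColumnAttention")
    layers.append("MSATransition")
    return layers
```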