- 23 Oct, 2023 (4 commits)
  - Matthew W. Thompson authored
  - Matthew W. Thompson authored
  - Matthew W. Thompson authored
  - Matthew W. Thompson authored
- 19 Oct, 2023 (1 commit)
  - Gustaf Ahdritz authored
- 11 Oct, 2023 (1 commit)
  - Sachin Kadyan authored
- 10 Oct, 2023 (34 commits)
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Gustaf Ahdritz authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - sachinkadyan7 authored
  - sachinkadyan7 authored
  - sachinkadyan7 authored
  - sachinkadyan7 authored
  - Sachin Kadyan authored: Fixed a bug in `data_transforms` that prevented creation of the MSA mask when the MSA contains only the input sequence. Set `max_msa_clusters=1` in the model presets to allow the input sequence to be an MSA cluster centre.
  - Sachin Kadyan authored: In `seqemb_mode`, `process_pdb` loads the sequence embedding for the PDB's protein and a dummy MSA.
  - Sachin Kadyan authored
  - Sachin Kadyan authored
  - Sachin Kadyan authored: In `seq_emb` mode, the `AlignmentRunner` only generates templates.
  - Sachin Kadyan authored: Added the `seq_emb` features to the list of features processed by the feature pipeline when using `seq_emb` mode: the list of `seq_emb` features is added to `feature_names`.
  - Sachin Kadyan authored: `_process_seqemb_features` now returns a dictionary instead of a tensor.
  - Sachin Kadyan authored: Bugfix: `torch` throws warnings when a tensor is copied via the tensor constructor; added a lambda that `.clone()`s those tensors instead.
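A minimal sketch of the copy pattern this fix refers to; the helper name below is illustrative, not the actual OpenFold code. Copy-constructing via `torch.tensor(existing_tensor)` emits a `UserWarning` that recommends `.clone().detach()` instead:

```python
import torch

# Hypothetical helper mirroring the fix described above: route tensors
# through .clone().detach() instead of the copy-constructing
# torch.tensor(...) call, which triggers the warning.
to_tensor = lambda t: (
    t.clone().detach() if isinstance(t, torch.Tensor) else torch.tensor(t)
)

x = torch.arange(4, dtype=torch.float32)
y = to_tensor(x)  # no copy-construct warning is emitted here

# The copy is independent of the original tensor's storage.
assert torch.equal(x, y)
assert y.data_ptr() != x.data_ptr()
```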
  - Sachin Kadyan authored
  - Sachin Kadyan authored: Turned on `seqemb` mode in the `data`, `model`, and `globals` configs when using the `seqemb` training preset; set configuration options specific to finetuning in general.
  - Sachin Kadyan authored: Turned on `seqemb` mode in the `data`, `model`, and `globals` configs when using the `seqemb` training preset.
  - Sachin Kadyan authored: Added passing of the sequence-embedding-mode flag from `data_modules` to `data_pipeline` for the training and inference pipelines: the `config.data.seqemb_mode.enabled` flag is passed to the FASTA, PDB, and MMCIF data pipelines.
  - Sachin Kadyan authored: Use sequence embedding files when in `seqemb` mode; make dummy MSA features for MMCIF when using `seqemb` mode.
  - Sachin Kadyan authored: Added a `no_column_attention` flag to the evoformer config, and a check in `evoformer.py` that switches off `MSAColumnAttention` when the flag is `True`.
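The gating this commit describes can be sketched as follows; the stub classes and the function name are illustrative stand-ins, not the actual OpenFold module wiring:

```python
# Stub modules standing in for the real attention blocks.
class MSARowAttention: ...
class MSAColumnAttention: ...

def build_msa_blocks(config: dict) -> list:
    """Assemble the per-block MSA modules, skipping column attention
    when the (assumed) `no_column_attention` flag is set."""
    blocks = [MSARowAttention()]
    if not config.get("no_column_attention", False):
        blocks.append(MSAColumnAttention())
    return blocks

# With the flag on, no MSAColumnAttention module is constructed.
blocks = build_msa_blocks({"no_column_attention": True})
assert not any(isinstance(b, MSAColumnAttention) for b in blocks)
```

Skipping column attention matters in single-sequence mode because an MSA with one row has no column dimension worth attending over.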
  - Sachin Kadyan authored: Added a `preembedding_embedder` config dictionary in `config`, and a `preemb_dim_size` property in `config` for specifying the single-sequence embedding size.
  - Sachin Kadyan authored: Added `seqemb_mode_enabled` as a configuration option; `model.py` switches to the `PreembeddingEmbedder` when the flag is `True`.
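The embedder switch can be sketched like this; the class bodies and the selection function are illustrative stubs (only the names `PreembeddingEmbedder` and `seqemb_mode_enabled` come from the commit message):

```python
# Stub embedders standing in for the real OpenFold modules.
class InputEmbedder:
    kind = "msa"  # default MSA-based input embedder

class PreembeddingEmbedder:
    kind = "seqemb"  # consumes precomputed single-sequence embeddings

def select_embedder(seqemb_mode_enabled: bool):
    """Mirror the switch described above: use PreembeddingEmbedder
    when the flag is True, the default embedder otherwise."""
    return PreembeddingEmbedder() if seqemb_mode_enabled else InputEmbedder()

assert select_embedder(True).kind == "seqemb"
assert select_embedder(False).kind == "msa"
```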
  - Sachin Kadyan authored: Added a `use_single_seq_mode` flag to the inference script arguments and passed it on to the FASTA file `data_processor`.
  - Sachin Kadyan authored: Added a method to load and process sequence embedding `*.pt` files; in `seqemb_mode`, the seqemb features are now added to the feature dictionary.
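A hedged sketch of what such a loader might look like; the function name, the feature key, and the assumed file layout (a single `[num_res, embed_dim]` tensor per `.pt` file) are illustrative, not the actual OpenFold method:

```python
import torch

def load_seqemb_features(path: str) -> dict:
    """Load a per-protein sequence embedding from a .pt file and wrap
    it as a feature dictionary (key name is an assumption)."""
    emb = torch.load(path, map_location="cpu")
    return {"seq_embedding": emb}
```

Returning a dictionary rather than a bare tensor matches the earlier commit changing `_process_seqemb_features` to return a dict, so the result can be merged into the main feature dictionary.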