1. 18 Mar, 2024 1 commit
    • Add MusicGen Melody (#28819) · c43b380e
      Yoach Lacombe authored
      
      
      * first modeling code
      
      * make repository
      
      * still WIP
      
      * update model
      
      * add tests
      
      * add latest change
      
      * clean docstrings and copied from
      
      * update docstrings md and readme
      
      * correct chroma function
      
      * correct copied from and remove unrelated test
      
      * add doc to toctree
      
      * correct imports
      
      * add convert script to not_doctested
      
      * Add suggestion from Sanchit
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * correct get_unconditional_inputs docstrings
      
      * modify README according to Sanchit's feedback
      
      * add chroma to audio utils
      
      * clean librosa and torchaudio hard dependencies
      
      * fix FE (feature extractor)
      
      * refactor audio decoder -> audio encoder for consistency with previous musicgen
      
      * refactor conditional -> encoder
      
      * modify sampling rate logic
      
      * modify license at the beginning
      
      * refactor all_self_attns -> all_attentions
      
      * remove Ignore copy from causal LM generate
      
      * add copied from for from_sub_models
      
      * fix make copies
      
      * add warning if audio is truncated
      
      * add copied from where relevant
      
      * remove artefact
      
      * fix convert script
      
      * fix torchaudio and FE
      
      * modify chroma method according to feedback -> better naming
      
      * refactor input_values -> input_features
      
      * refactor input_values -> input_features and fix FE import
      
      * add input_features to docstrings
      
      * correct inputs_embeds logic
      
      * remove dtype conversion
      
      * refactor _prepare_conditional_hidden_states_kwargs_for_generation -> _prepare_encoder_hidden_states_kwargs_for_generation
      
      * change warning for chroma length
      
      * Update src/transformers/models/musicgen_melody/convert_musicgen_melody_transformers.py
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * change way to save wav, using soundfile
      
      * correct docs and change to soundfile
      
      * fix import
      
      * fix init proj layers
      
      * remove line breaks from md
      
      * fix issue with docstrings
      
      * add FE suggestions
      
      * improve is-in logic and remove useless imports
      
      * remove custom from_pretrained
      
      * simplify docstring code
      
      * add suggestions for modeling tests
      
      * make style
      
      * update conversion script with sanity check
      
      * remove encoder attention mask from conditional generation
      
      * replace MusicGen Melody checkpoints with official org
      
      * rename ylacombe -> facebook in checkpoints
      
      * fix copies
      
      * remove unnecessary warning
      
      * add shape in code docstrings
      
      * add files to slow doc tests
      
      * fix md bug and add md to not_tested
      
      * make fix-copies
      
      * fix hidden states test and batching
      
      ---------
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
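      A minimal usage sketch for the model this PR adds, following the MusicGen Melody docs; the generation arguments are illustrative, and the soundfile save mirrors the "change way to save wav, using soundfile" step above:

      ```python
      # Hedged sketch: generate music from a text prompt with MusicGen Melody.
      import soundfile as sf
      from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

      processor = AutoProcessor.from_pretrained("facebook/musicgen-melody")
      model = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")

      inputs = processor(text=["80s pop track with bassy drums and synth"], return_tensors="pt")
      audio_values = model.generate(**inputs, max_new_tokens=256)

      # This PR switched wav saving from torchaudio to soundfile:
      sampling_rate = model.config.audio_encoder.sampling_rate
      sf.write("musicgen_melody_out.wav", audio_values[0].T.numpy(), sampling_rate)
      ```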
  2. 16 Feb, 2024 1 commit
  3. 14 Feb, 2024 1 commit
  4. 02 Feb, 2024 1 commit
  5. 30 Jan, 2024 1 commit
    • Add tf_keras imports to prepare for Keras 3 (#28588) · 415e9a09
      Matt authored
      * Port core files + ESM (because ESM code is odd)
      
      * Search-replace in modelling code
      
      * Fix up transfo_xl as well
      
      * Fix other core files + tests (still need to add correct import to tests)
      
      * Fix cookiecutter
      
      * make fixup, fix imports in some more core files
      
      * Auto-add imports to tests
      
      * Cleanup, add imports to sagemaker tests
      
      * Use correct exception for importing tf_keras
      
      * Fixes in modeling_tf_utils
      
      * make fixup
      
      * Correct version parsing code
      
      * Ensure the pipeline tests correctly revert to float32 after each test
      
      * Ensure the pipeline tests correctly revert to float32 after each test
      
      * More tf.keras -> keras
      
      * Add dtype cast
      
      * Better imports of tf_keras
      
      * Add a cast for tf.assign, just in case
      
      * Fix callback imports
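      A simplified sketch of the guarded import pattern this PR rolls out (the real helper lives in the transformers TF utilities; the version check shown here is an approximation):

      ```python
      # Hedged sketch: prefer the backwards-compatible tf_keras package, and fail
      # loudly if only Keras 3 is installed, which the TF models do not support.
      try:
          import tf_keras as keras
      except (ModuleNotFoundError, ImportError):
          import keras

          if int(keras.__version__.split(".")[0]) > 2:
              raise ValueError(
                  "Your currently installed version of Keras is Keras 3, but this is not "
                  "yet supported in Transformers. Please install the backwards-compatible "
                  "tf-keras package with `pip install tf-keras`."
              )
      ```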
  6. 29 Jan, 2024 1 commit
  7. 23 Jan, 2024 1 commit
  8. 15 Jan, 2024 1 commit
  9. 22 Dec, 2023 1 commit
  10. 20 Dec, 2023 1 commit
    • Align backbone stage selection with out_indices & out_features (#27606) · ee298a16
      amyeroberts authored
      * Iterate over out_features instead of stage_names
      
      * Update for all backbones
      
      * Add tests
      
      * Fix
      
      * Align timm backbone behaviour with other backbones
      
      * Fix tests
      
      * Stricter checks on set out_features and out_indices
      
      * Revert back stage selection logic
      
      * Remove out-of-order logic
      
      * Document restriction in docstrings
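      A hedged sketch of the behaviour being aligned: out_features (stage names) and out_indices are two views of the same stage selection, and this PR makes all backbones iterate over them consistently (checkpoint name illustrative):

      ```python
      import torch
      from transformers import AutoBackbone

      # Select stages by name ...
      backbone = AutoBackbone.from_pretrained("microsoft/resnet-50", out_features=["stage2", "stage4"])
      # ... or equivalently by index; the PR adds stricter checks that the two stay in sync.
      # backbone = AutoBackbone.from_pretrained("microsoft/resnet-50", out_indices=[2, 4])

      pixel_values = torch.rand(1, 3, 224, 224)
      outputs = backbone(pixel_values)
      print([fmap.shape for fmap in outputs.feature_maps])  # one feature map per selected stage
      ```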
  11. 19 Dec, 2023 1 commit
  12. 18 Dec, 2023 1 commit
    • More TF fixes (#28081) · 71d47f0a
      Matt authored
      * More build_in_name_scope()
      
      * Make sure we set the save spec now that we don't do it with dummies anymore
      
      * make fixup
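      For context, a hedged sketch of the helper the first bullet refers to: build_in_name_scope() builds a TF model under its own name scope so weight names stay consistent with models built through __call__ (config-only instantiation shown for illustration):

      ```python
      from transformers import BertConfig, TFBertModel

      model = TFBertModel(BertConfig())
      # Roughly equivalent to calling model.build() inside tf.name_scope(model.name).
      model.build_in_name_scope()
      ```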
  13. 15 Dec, 2023 1 commit
  14. 08 Dec, 2023 1 commit
    • F.scaled_dot_product_attention support (#26572) · 80377eb0
      fxmarty authored
      
      
      * add sdpa
      
      * wip
      
      * cleaning
      
      * add ref
      
      * yet more cleaning
      
      * and more :)
      
      * wip llama
      
      * working llama
      
      * add output_attentions=True support
      
      * bigcode sdpa support
      
      * fixes
      
      * gpt-bigcode support, require torch>=2.1.1
      
      * add falcon support
      
      * fix conflicts falcon
      
      * style
      
      * fix attention_mask definition
      
      * remove output_attentions from attnmaskconverter
      
      * support whisper without removing any Copied from statement
      
      * fix mbart default to eager renaming
      
      * fix typo in falcon
      
      * fix is_causal in SDPA
      
      * check is_flash_attn_2_available in the model's init as well, in case the model is not initialized through from_pretrained
      
      * add warnings when falling back on the manual implementation
      
      * make doc more precise
      
      * wip replace _flash_attn_enabled by config.attn_implementation
      
      * fix typo
      
      * add tests
      
      * style
      
      * add a copy.deepcopy on the config in from_pretrained, as we do not want to modify it in place
      
      * obey config.attn_implementation if a config is passed to from_pretrained
      
      * fix is_torch_sdpa_available when torch is not installed
      
      * remove dead code
      
      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/bart/modeling_bart.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove duplicate pretraining_tp code
      
      * add dropout in llama
      
      * make comment on attn_mask more precise
      
      * add fmt: off for _unmask_unattended docstring
      
      * make num_masks comment more precise
      
      * nuke pretraining_tp in LlamaSDPAAttention following Arthur's suggestion
      
      * cleanup modeling_utils
      
      * backward compatibility
      
      * fix style as requested
      
      * style
      
      * improve documentation
      
      * test pass
      
      * style
      
      * add _unmask_unattended tests
      
      * skip meaningless tests for idefics
      
      * hard_check SDPA requirements when specifically requested
      
      * standardize the use of XXX_ATTENTION_CLASSES
      
      * fix SDPA bug with mem-efficient backend on CUDA when using fp32
      
      * fix test
      
      * rely on SDPA is_causal parameter to handle the causal mask in some cases
      
      * fix FALCON_ATTENTION_CLASSES
      
      * remove _flash_attn_2_enabled occurrences
      
      * fix test
      
      * add OPT to the list of supported flash models
      
      * improve test
      
      * properly test on different SDPA backends and dtypes, and handle pad tokens separately in the test
      
      * remove remaining _flash_attn_2_enabled occurrence
      
      * Update src/transformers/modeling_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/modeling_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/modeling_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/modeling_attn_mask_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update docs/source/en/perf_infer_gpu_one.md
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove use_attn_implementation
      
      * fix docstring & slight bug
      
      * make attn_implementation internal (_attn_implementation)
      
      * typos
      
      * fix tests
      
      * deprecate use_flash_attention_2=True
      
      * fix test
      
      * add back llama that was removed by mistake
      
      * fix tests
      
      * remove _flash_attn_2_enabled occurrences (bis)
      
      * add check & test that passed attn_implementation is valid
      
      * fix falcon torchscript export
      
      * fix device of mask in tests
      
      * add tip about torch.jit.trace and move BetterTransformer doc below sdpa
      
      * fix parameterized.expand order
      
      * move tests from test_modeling_attn_mask_utils to test_modeling_utils as a relevant test class is already there
      
      * update sdpaattention class with the new cache
      
      * Update src/transformers/configuration_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/bark/modeling_bark.py
      
      * address review comments
      
      * WIP torch.jit.trace fix. left: test both eager & sdpa
      
      * add test for torch.jit.trace for both eager/sdpa
      
      * fix falcon with torch==2.0 that needs to use sdpa
      
      * fix doc
      
      * hopefully last fix
      
      * fix key_value_length that has no default now in mask converter
      
      * is it flaky?
      
      * fix speculative decoding bug
      
      * tests do pass
      
      * fix following #27907
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
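      The user-facing surface this PR adds, as a minimal sketch (checkpoint name illustrative): opting in to torch.nn.functional.scaled_dot_product_attention at load time.

      ```python
      import torch
      from transformers import AutoModelForCausalLM

      # "sdpa" dispatches attention through F.scaled_dot_product_attention;
      # "eager" keeps the manual implementation, and per the bullets above a
      # warning is emitted when falling back to it.
      model = AutoModelForCausalLM.from_pretrained(
          "meta-llama/Llama-2-7b-hf",
          torch_dtype=torch.float16,
          attn_implementation="sdpa",
      )
      ```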
  15. 23 Nov, 2023 1 commit
  16. 13 Nov, 2023 1 commit
  17. 31 Oct, 2023 1 commit
  18. 24 Oct, 2023 1 commit
  19. 06 Oct, 2023 1 commit
  20. 25 Sep, 2023 1 commit
  21. 21 Sep, 2023 1 commit
  22. 19 Sep, 2023 1 commit
  23. 14 Sep, 2023 1 commit
  24. 05 Sep, 2023 1 commit
  25. 29 Aug, 2023 1 commit
  26. 24 Aug, 2023 1 commit
  27. 11 Aug, 2023 1 commit
  28. 09 Aug, 2023 1 commit
  29. 08 Aug, 2023 1 commit
  30. 03 Aug, 2023 1 commit
  31. 31 Jul, 2023 1 commit
  32. 28 Jul, 2023 1 commit
  33. 30 Jun, 2023 1 commit
  34. 28 Jun, 2023 1 commit
  35. 20 Jun, 2023 3 commits
  36. 16 Jun, 2023 1 commit
    • Add test for proper TF input signatures (#24320) · 91389950
      Matt authored
      * Add test for proper input signatures
      
      * No more signature pruning
      
      * Test the dummy inputs are valid too
      
      * fine-tine -> fine-tune
      
      * Fix indent in test_dataset_conversion
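      A hedged sketch of what the new test checks: a TF model's input_signature maps each input name to a tf.TensorSpec, and the model's dummy inputs must be valid under it (checkpoint name illustrative):

      ```python
      import tensorflow as tf
      from transformers import TFBertModel

      model = TFBertModel.from_pretrained("bert-base-uncased")

      for name, spec in model.input_signature.items():
          assert isinstance(spec, tf.TensorSpec)
          print(name, spec.shape, spec.dtype)
      ```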
  37. 13 Jun, 2023 1 commit
    • Stop storing references to bound methods via tf.function (#24146) · 3bd1fe43
      Matt authored
      * Stop storing references to bound methods in tf.functions
      
      * Remove the gc.collect calls now that we resolved the underlying problem
      
      * Remove the default signature from model.serving entirely, big cleanup
      
      * Remove _prune_signature as self.input_signature can prune itself
      
      * Restore serving docstring
      
      * Update int support test to check the input signature
      
      * Make sure other tests also use model.input_signature and not serving.input_signature
      
      * Restore _prune_signature
      
      * Remove the doctest GC now that it's no longer needed
      
      * Correct core tests to use the pruned sig
      
      * order lines correctly in core tests
      
      * Add eager_serving back with a deprecation warning
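      A hedged sketch of the export path this refactor touches: the serving signature is derived from model.input_signature, and after this change it no longer pins a reference to a bound method (which previously required gc.collect() workarounds):

      ```python
      from transformers import TFBertModel

      model = TFBertModel.from_pretrained("bert-base-uncased")
      # saved_model=True writes a TF SavedModel whose serving_default signature
      # comes from the (possibly pruned) input signature.
      model.save_pretrained("exported_bert", saved_model=True)
      ```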
  38. 08 Jun, 2023 1 commit