1. 14 Nov, 2023 2 commits
    • Add speecht5 batch generation and fix wrong attention mask when padding (#25943) · 4309abed
      Sihan Chen authored
      * fix speecht5 wrong attention mask when padding
      
      * enable batch generation and add parameter attention_mask
      
      * fix doc
      
      * fix format
      
      * batch postnet inputs, return batched lengths, and consistent to old api
      
      * fix format
      
      * fix format
      
      * fix the format
      
      * fix doc-builder error
      
      * add test, cross attention and docstring
      
      * optimize code based on reviews
      
      * docbuild
      
      * refine
      
      * not skip slow test
      
      * add consistent dropout for batching
      
      * loose atol
      
      * add another test regarding to the consistency of vocoder
      
      * fix format
      
      * refactor
      
      * add return_concrete_lengths as parameter for consistency w/wo batching
      
      * fix review issues
      
      * fix cross_attention issue
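The padding fix above boils down to building an attention mask that matches the padded batch. A minimal, framework-free sketch of the idea (illustrative names, not the actual SpeechT5 code):

```python
# Illustrative only: when a batch is padded to a common length, the
# attention mask must mark real positions with 1 and padding with 0,
# otherwise the model attends to pad frames and outputs drift.

def pad_batch(sequences, pad_id=0):
    """Pad variable-length sequences and build the matching attention mask."""
    max_len = max(len(s) for s in sequences)
    input_ids, attention_mask = [], []
    for s in sequences:
        n_pad = max_len - len(s)
        input_ids.append(list(s) + [pad_id] * n_pad)
        attention_mask.append([1] * len(s) + [0] * n_pad)
    return input_ids, attention_mask

ids, mask = pad_batch([[5, 6, 7], [8, 9]])
# ids  -> [[5, 6, 7], [8, 9, 0]]
# mask -> [[1, 1, 1], [1, 1, 0]]
```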
    • [`CI-test_torch`] skip `test_tf_from_pt_safetensors` for 4 models (#27481) · e107ae36
      Arthur authored
      * skip 4 tests
      
      * nits
      
      * style
      
      * wow it's not my day
  2. 13 Nov, 2023 4 commits
    • [time series] Add PatchTST (#25927) · 2ac5b932
      Gift Sinthong authored
      
      
      * Initial commit of PatchTST model classes
      Co-authored-by: Phanwadee Sinthong <phsinthong@gmail.com>
      Co-authored-by: Nam Nguyen <namctin@gmail.com>
      Co-authored-by: Vijay Ekambaram <vijaykr.e@gmail.com>
      Co-authored-by: Ngoc Diep Do <55230119+diepi@users.noreply.github.com>
      Co-authored-by: Wesley Gifford <79663411+wgifford@users.noreply.github.com>
      
      * Add PatchTSTForPretraining
      
      * update to include classification
      Co-authored-by: Phanwadee Sinthong <phsinthong@gmail.com>
      Co-authored-by: Nam Nguyen <namctin@gmail.com>
      Co-authored-by: Vijay Ekambaram <vijaykr.e@gmail.com>
      Co-authored-by: Ngoc Diep Do <55230119+diepi@users.noreply.github.com>
      Co-authored-by: Wesley Gifford <79663411+wgifford@users.noreply.github.com>
      
      * clean up auto files
      
      * Add PatchTSTForPrediction
      
      * Fix relative import
      
      * Replace original PatchTSTEncoder with ChannelAttentionPatchTSTEncoder
      
      * temporary adding absolute path + add PatchTSTForForecasting class
      
      * Update base PatchTSTModel + Unittest
      
      * Update ForecastHead to use the config class
      
      * edit cv_random_masking, add mask to model output
      
      * Update configuration_patchtst.py
      
      * add masked_loss to the pretraining
      
      * add PatchEmbeddings
      
      * Update configuration_patchtst.py
      
      * edit loss which considers mask in the pretraining
      
      * remove patch_last option
      
      * Add commits from internal repo
      
      * Update ForecastHead
      
      * Add model weight initilization + unittest
      
      * Update PatchTST unittest to use local import
      
      * PatchTST integration tests for pretraining and prediction
      
      * Added PatchTSTForRegression + update unittest to include label generation
      
      * Revert unrelated model test file
      
      * Combine similar output classes
      
      * update PredictionHead
      
      * Update configuration_patchtst.py
      
      * Add Revin
      
      * small edit to PatchTSTModelOutputWithNoAttention
      
      * Update modeling_patchtst.py
      
      * Updating integration test for forecasting
      
      * Fix unittest after class structure changed
      
      * docstring updates
      
      * change input_size to num_input_channels
      
      * more formatting
      
      * Remove some unused params
      
      * Add a comment for pretrained models
      
      * add channel_attention option
      
      add channel_attention option and remove unused positional encoders.
      
      * Update PatchTST models to use HF's MultiHeadAttention module
      
      * Update paper + github urls
      
      * Fix hidden_state return value
      
      * Update integration test to use PatchTSTForForecasting
      
      * Adding dataclass decorator for model output classes
      
      * Run fixup script
      
      * Rename model repos for integration test
      
      * edit argument explanation
      
      * change individual option to shared_projection
      
      * style
      
      * Rename integration test + import cleanup
      
      * Fix outpu_hidden_states return value
      
      * removed unused mode
      
      * added std, mean and nops scaler
      
      * add initial distributional loss for predition
      
      * fix typo in docs
      
      * add generate function
      
      * formatting
      
      * add num_parallel_samples
      
      * Fix a typo
      
      * copy weighted_average function, edit PredictionHead
      
      * edit PredictionHead
      
      * add distribution head to forecasting
      
      * formatting
      
      * Add generate function for forecasting
      
      * Add generate function to prediction task
      
      * formatting
      
      * use argsort
      
      * add past_observed_mask ordering
      
      * fix arguments
      
      * docs
      
      * add back test_model_outputs_equivalence test
      
      * formatting
      
      * cleanup
      
      * formatting
      
      * use ACT2CLS
      
      * formatting
      
      * fix add_start_docstrings decorator
      
      * add distribution head and generate function to regression task
      
      add distribution head and generate function to regression task. Also made add PatchTSTForForecastingOutput,  PatchTSTForRegressionOutput.
      
      * add distribution head and generate function to regression task
      
      add distribution head and generate function to regression task. Also made add PatchTSTForForecastingOutput,  PatchTSTForRegressionOutput.
      
      * fix typos
      
      * add forecast_masking
      
      * fixed tests
      
      * use set_seed
      
      * fix doc test
      
      * formatting
      
      * Update docs/source/en/model_doc/patchtst.md
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * better var names
      
      * rename PatchTSTTranspose
      
      * fix argument names and docs string
      
      * remove compute_num_patches and unused class
      
      * remove assert
      
      * renamed to PatchTSTMasking
      
      * use num_labels for classification
      
      * use num_labels
      
      * use default num_labels from super class
      
      * move model_type after docstring
      
      * renamed PatchTSTForMaskPretraining
      
      * bs -> batch_size
      
      * more review fixes
      
      * use hidden_state
      
      * rename encoder layer and block class
      
      * remove commented seed_number
      
      * edit docstring
      
      * Add docstring
      
      * formatting
      
      * use past_observed_mask
      
      * doc suggestion
      
      * make fix-copies
      
      * use Args:
      
      * add docstring
      
      * add docstring
      
      * change some variable names and add PatchTST before some class names
      
      * formatting
      
      * fix argument types
      
      * fix tests
      
      * change x variable to patch_input
      
      * format
      
      * formatting
      
      * fix-copies
      
      * Update tests/models/patchtst/test_modeling_patchtst.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * move loss to forward
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * formatting
      
      * fix a bug when pre_norm is set to True
      
      * output_hidden_states is set to False as default
      
      * set pre_norm=True as default
      
      * format docstring
      
      * format
      
      * output_hidden_states is None by default
      
      * add missing docs
      
      * better var names
      
      * docstring: remove default to False in output_hidden_states
      
      * change labels name to target_values in regression task
      
      * format
      
      * fix tests
      
      * change to forecast_mask_ratios and random_mask_ratio
      
      * change mask names
      
      * change future_values to target_values param in the prediction class
      
      * remove nn.Sequential and make PatchTSTBatchNorm class
      
      * black
      
      * fix argument name for prediction
      
      * add output_attentions option
      
      * add output_attentions to PatchTSTEncoder
      
      * formatting
      
      * Add attention output option to all classes
      
      * Remove PatchTSTEncoderBlock
      
      * create PatchTSTEmbedding class
      
      * use config in PatchTSTPatchify
      
      * Use config in PatchTSTMasking class
      
      * add channel_attn_weights
      
      * Add PatchTSTScaler class
      
      * add output_attentions arg to test function
      
      * format
      
      * Update doc with image patchtst.md
      
      * fix-copies
      
      * rename Forecast <-> Prediction
      
      * change name of a few parameters to match with PatchTSMixer.
      
      * Remove *ForForecasting class to match with other time series models.
      
      * make style
      
      * Remove PatchTSTForForecasting in the test
      
      * remove PatchTSTForForecastingOutput class
      
      * change test_forecast_head to test_prediction_head
      
      * style
      
      * fix docs
      
      * fix tests
      
      * change num_labels to num_targets
      
      * Remove PatchTSTTranspose
      
      * remove arguments in PatchTSTMeanScaler
      
      * remove arguments in PatchTSTStdScaler
      
      * add config as an argument to all the scaler classes
      
      * reformat
      
      * Add norm_eps for batchnorm and layernorm
      
      * reformat.
      
      * reformat
      
      * edit docstring
      
      * update docstring
      
      * change variable name pooling to pooling_type
      
      * fix output_hidden_states as tuple
      
      * fix bug when calling PatchTSTBatchNorm
      
      * change stride to patch_stride
      
      * create PatchTSTPositionalEncoding class and restructure the PatchTSTEncoder
      
      * formatting
      
      * initialize scalers with configs
      
      * edit output_hidden_states
      
      * style
      
      * fix forecast_mask_patches doc string
      
      ---------
      Co-authored-by: Gift Sinthong <gift.sinthong@ibm.com>
      Co-authored-by: Nam Nguyen <namctin@gmail.com>
      Co-authored-by: Vijay Ekambaram <vijaykr.e@gmail.com>
      Co-authored-by: Ngoc Diep Do <55230119+diepi@users.noreply.github.com>
      Co-authored-by: Wesley Gifford <79663411+wgifford@users.noreply.github.com>
      Co-authored-by: Wesley M. Gifford <wmgifford@us.ibm.com>
      Co-authored-by: nnguyen <nnguyen@us.ibm.com>
      Co-authored-by: Ngoc Diep Do <diiepy@gmail.com>
      Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
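Several of the commits above (`PatchTSTPatchify`, the `stride` → `patch_stride` rename) revolve around the model's core preprocessing step: turning a time series into overlapping patches that the transformer treats as tokens. A hedged, framework-free sketch of that patching step (function name is illustrative):

```python
def patchify(series, patch_length, patch_stride):
    """Slide a window of patch_length over the series with patch_stride;
    each window becomes one patch token for the transformer encoder."""
    patches = []
    start = 0
    while start + patch_length <= len(series):
        patches.append(series[start:start + patch_length])
        start += patch_stride
    return patches

patchify(list(range(10)), patch_length=4, patch_stride=2)
# -> [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

With `patch_stride < patch_length` the patches overlap; with equal values they tile the series without overlap.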
    • Add DINOv2 depth estimation (#26092) · 2422c38d
      NielsRogge authored
      
      
      * First draft
      
      * Fix style
      
      * More improvements
      
      * Fix tests
      
      * Fix tests
      
      * Convert checkpoint
      
      * Improve DPTImageProcessor
      
      * Remove scripts, improve conversion script
      
      * Remove print statements
      
      * Fix test
      
      * Improve docstring
      
      * More improvements
      
      * Fix style
      
      * Fix image processor
      
      * Add tests
      
      * Address comments
      
      * Address comments
      
      * Make bias backwards compatible
      
      * Address comment
      
      * Address comment
      
      * Address comment
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Address comments
      
      * Add flag
      
      * Add tests
      
      * Make tests smaller
      
      * Use regular BackboneOutput
      
      * Fix all tests
      
      * Update test
      
      * Convert more checkpoints
      
      * Convert giant checkpoints, add integration test
      
      * Rename size_divisibility to size_divisor
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
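The `size_divisibility` → `size_divisor` rename touches the resizing logic: backbone stages downsample spatially, so input dims are rounded up to a multiple of the divisor. A hedged sketch of that rounding (illustrative helper, not the `DPTImageProcessor` code):

```python
import math

def round_up_to_divisor(height, width, size_divisor=32):
    """Round spatial dims up so each is a multiple of size_divisor,
    keeping strided backbone stages evenly divisible."""
    new_h = math.ceil(height / size_divisor) * size_divisor
    new_w = math.ceil(width / size_divisor) * size_divisor
    return new_h, new_w

round_up_to_divisor(480, 641)  # -> (480, 672)
```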
    • Fix `from_pt` flag when loading with `safetensors` (#27394) · 68ae3be7
      Lysandre Debut authored
      * Fix
      
      * Tests
      
      * Fix
    • Remove-auth-token (#27060) · b97cab7e
      Arthur authored
      * don't use `use_auth_token`internally
      
      * let's use token everywhere
      
      * fixup
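The `use_auth_token` → `token` migration follows the classic keyword-deprecation pattern: accept the old name, warn, and forward to the new one. A hedged sketch of the pattern (helper name is illustrative, not the library's internal shim):

```python
import warnings

def resolve_token(token=None, use_auth_token=None):
    """Map the deprecated `use_auth_token` kwarg onto `token`, warning once."""
    if use_auth_token is not None:
        warnings.warn(
            "`use_auth_token` is deprecated; pass `token` instead.",
            FutureWarning,
        )
        # The new name wins if both are given.
        if token is None:
            token = use_auth_token
    return token
```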
  3. 10 Nov, 2023 2 commits
    • Add Phi-1 and Phi-1_5 (#26170) · e1c3ac25
      Susnato Dhar authored
      * only dir not even init
      
      * init
      
      * tokenizer removed and reference of codegen added
      
      * modeling file updated a lot remaining app_rotary_emb
      
      * conversion script done
      
      * conversion script fixed, a lot of factoring done and most tests pass
      
      * added token_clf and extractive_QA_head
      
      * integration tests pass
      
      * flash attn tests pass!
      
      * config done
      
      * more docs in modeling file
      
      * some style fix
      
      * style and others
      
      * doc test error fix
      
      * more doc fix
      
      * some attention fixes
      
      * most fixes
      
      * style and other fixes
      
      * docs fix and config
      
      * doc fix
      
      * some comments
      
      * conversion script updated
      
      * conversion script updated
      
      * Revert "conversion script updated"
      
      This reverts commit e92378c54084ec0747041b113083d1746ecb6c7f.
      
      * final comments
      
      * add Phi to language_modeling.md
      
      * edit phi.md file
      
      * rebase and fix
      
      * removed phi-1.5 example
      
      * changed model_type from 'phi'->'mixformer-sequential'
      
      * small change
      
      * small change
      
      * revert \small change
      
      * changed mixformer-sequential->phi
      
      * small change
      
      * added phi-1.5 example instead of phi-1
      
      * doc test might pass now
      
      * rebase and small change
      
      * added the dropout layer
      
      * more fixes
      
      * modified .md file
      
      * very very small doc change
    • Add CLVP (#24745) · 7e9f10ac
      Susnato Dhar authored
      * init commit
      
      * attention arch done except rotary emb
      
      * rotary emb done
      
      * text encoder working
      
      * outputs matching
      
      * arch first pass done
      
      * make commands done, tests and docs remaining
      
      * all tests passed, only docs remaining
      
      * docs done
      
      * doc-builder fix
      
      * convert script removed(not relevant)
      
      * minor comments done
      
      * added ckpt conversion script
      
      * tokenizer done
      
      * very minor fix of index.md 2
      
      * mostly make fixup related
      
      * all done except fe and rotary emb
      
      * very small change
      
      * removed unidecode dependency
      
      * style changes
      
      * tokenizer removed require_backends
      
      * added require_inflect to tokenizer tests
      
      * removed VOCAB_FILES in tokenizer test
      
      * inflect dependency removed
      
      * added rotary pos emb cache and simplified the apply method
      
      * style
      
      * little doc change
      
      * more comments
      
      * feature extractor added
      
      * added processor
      
      * auto-regressive config added
      
      * added CLVPConditioningEncoder
      
      * comments done except the test one
      
      * weights added successfull(NOT tested)
      
      * tokenizer fix with numbers
      
      * generate outputs matching
      
      * almost tests passing Integ tests not written
      
      * Integ tests added
      
      * major CUDA error fixed
      
      * docs done
      
      * rebase and multiple fixes
      
      * fixed rebase overwrites
      
      * generate code simplified and tests for AutoRegressive model added
      
      * minor changes
      
      * refectored gpt2 code in clvp file
      
      * weights done and all code refactored
      
      * mostly done except the fast_tokenizer
      
      * doc test fix
      
      * config file's doc fixes
      
      * more config fix
      
      * more comments
      
      * tokenizer comments mostly done
      
      * modeling file mostly refactored and can load modules
      
      * ClvpEncoder tested
      
      * ClvpDecoder, ClvpModel and ClvpForCausalLM tested
      
      * integration and all tests passed
      
      * more fixes
      
      * docs almost done
      
      * ckpt conversion refectored
      
      * style and some failing tests fix
      
      * comments
      
      * temporary output fix but test_assisted_decoding_matches_greedy_search test fails
      
      * majority changes done
      
      * use_cache outputs same now! Along with the asisted_greedy_decoding test fix
      
      * more comments
      
      * more comments
      
      * prepare_inputs_for_generation fixed and _prepare_model_inputs added
      
      * style fix
      
      * clvp.md change
      
      * moved clvpconditionalencoder norms
      
      * add model to new index
      
      * added tokenizer input_ids_with_special_tokens
      
      * small fix
      
      * config mostly done
      
      * added config-tester and changed conversion script
      
      * more comments
      
      * comments
      
      * style fix
      
      * some comments
      
      * tokenizer changed back to prev state
      
      * small commnets
      
      * added output hidden states for the main model
      
      * style fix
      
      * comments
      
      * small change
      
      * revert small change
      
      * .
      
      * Update clvp.md
      
      * Update test_modeling_clvp.py
      
      * :)
      
      * some minor change
      
      * new fixes
      
      * remove to_dict from FE
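One of the commits above ("added rotary pos emb cache and simplified the apply method") caches the rotary position-embedding tables instead of recomputing them every forward pass. A hedged, framework-free sketch of building such a cos/sin cache (names and the `base` default are the conventional ones, not necessarily CLVP's):

```python
import math

def rotary_cache(dim, max_positions, base=10000.0):
    """Precompute per-position cos/sin tables for rotary embeddings;
    the apply step then just indexes into these tables."""
    inv_freq = [base ** (-(2 * i) / dim) for i in range(dim // 2)]
    cos = [[math.cos(pos * f) for f in inv_freq] for pos in range(max_positions)]
    sin = [[math.sin(pos * f) for f in inv_freq] for pos in range(max_positions)]
    return cos, sin

cos, sin = rotary_cache(dim=8, max_positions=16)
# position 0 rotates by nothing: cos[0] is all 1.0, sin[0] is all 0.0
```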
  4. 09 Nov, 2023 5 commits
  5. 08 Nov, 2023 4 commits
  6. 07 Nov, 2023 1 commit
  7. 06 Nov, 2023 1 commit
  8. 03 Nov, 2023 2 commits
  9. 02 Nov, 2023 3 commits
  10. 01 Nov, 2023 4 commits
  11. 31 Oct, 2023 3 commits
  12. 30 Oct, 2023 3 commits
    • [`core`/ `GC` / `tests`] Stronger GC tests (#27124) · f7ea959b
      Younes Belkada authored
      
      
      * stronger GC tests
      
      * better tests and skip failing tests
      
      * break down into 3 sub-tests
      
      * break down into 3 sub-tests
      
      * refactor a bit
      
      * more refactor
      
      * fix
      
      * last nit
      
      * credits contrib and suggestions
      
      * credits contrib and suggestions
      
      ---------
      Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
    • Fix some tests using `"common_voice"` (#27147) · 57699496
      Yih-Dar authored
      
      
      * Use mozilla-foundation/common_voice_11_0
      
      * Update expected values
      
      * Update expected values
      
      * For test_word_time_stamp_integration
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
    • Add `Kosmos-2` model (#24709) · 691fd8fd
      Yih-Dar authored
      
      
      * Add KOSMOS-2 model
      
      * update
      
      * update
      
      * update
      
      * address review comment - 001
      
      * address review comment - 002
      
      * address review comment - 003
      
      * style
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * fix
      
      * address review comment - 004
      
      * address review comment - 005
      
      * address review comment - 006
      
      * address review comment - 007
      
      * address review comment - 008
      
      * address review comment - 009
      
      * address review comment - 010
      
      * address review comment - 011
      
      * update readme
      
      * fix
      
      * fix
      
      * fix
      
      * [skip ci] fix
      
      * revert the change in _decode
      
      * fix docstring
      
      * fix docstring
      
      * Update docs/source/en/model_doc/kosmos-2.md
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * no more Kosmos2Tokenizer
      
      * style
      
      * remove "returned when being computed by the model"
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * UTM5 Atten
      
      * fix attn mask
      
      * use present_key_value_states instead of next_decoder_cache
      
      * style
      
      * conversion scripts
      
      * conversion scripts
      
      * conversion scripts
      
      * Add _reorder_cache
      
      * fix doctest and copies
      
      * rename 1
      
      * rename 2
      
      * rename 3
      
      * make fixup
      
      * fix table
      
      * fix docstring
      
      * rename 4
      
      * change repo_id
      
      * remove tip
      
      * update md file
      
      * make style
      
      * update md file
      
      * put docs/source/en/model_doc/kosmos-2.md to slow
      
      * update conversion script
      
      * Use CLIPImageProcessor in Kosmos2Processor
      
      * Remove Kosmos2ImageProcessor
      
      * Remove to_dict in Kosmos2Config
      
      * Remove files
      
      * fix import
      
      * Update conversion
      
      * normalized=False
      
      * Not using hardcoded values like <image>
      
      * elt --> element
      
      * Apply suggestion
      
      * Not using hardcoded values like </image>
      
      * No assert
      
      * No nested functions
      
      * Fix md file
      
      * copy
      
      * update doc
      
      * fix docstring
      
      * fix name
      
      * Remove _add_remove_spaces_around_tag_tokens
      
      * Remove dummy docstring of _preprocess_single_example
      
      * Use `BatchEncoding`
      
      * temp
      
      * temp
      
      * temp
      
      * Update
      
      * Update
      
      * Make Kosmos2ProcessorTest a bit pretty
      
      * Update gradient checkpointing
      
      * Fix gradient checkpointing test
      
      * Remove one liner remove_special_fields
      
      * Simplify conversion script
      
      * fix add_eos_token
      
      * update readme
      
      * update tests
      
      * Change to microsoft/kosmos-2-patch14-224
      
      * style
      
      * Fix doc
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
  13. 27 Oct, 2023 2 commits
    • [Attention Mask] Refactor all encoder-decoder attention mask (#27086) · ac589375
      Patrick von Platen authored
      
      
      * [FA2 Bart] Add FA2 to all Bart-like
      
      * better
      
      * Refactor attention mask
      
      * remove all customized atteniton logic
      
      * format
      
      * mass rename
      
      * replace _expand_mask
      
      * replace _expand_mask
      
      * mass rename
      
      * add pt files
      
      * mass replace & rename
      
      * mass replace & rename
      
      * mass replace & rename
      
      * mass replace & rename
      
      * Update src/transformers/models/idefics/modeling_idefics.py
      
      * fix more
      
      * clean more
      
      * fix more
      
      * make style
      
      * fix again
      
      * finish
      
      * finish
      
      * finish
      
      * finish
      
      * finish
      
      * finish
      
      * finish
      
      * finish
      
      * finish
      
      * finish
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * small fix mistral
      
      * finish
      
      * finish
      
      * finish
      
      * finish
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
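The refactor above centralizes helpers like `_expand_mask`, which broadcast a 2-D padding mask `[batch, src_len]` into the 4-D additive mask attention scores expect. A hedged, framework-free sketch of that expansion using nested lists in place of tensors:

```python
NEG_INF = float("-inf")

def expand_mask(mask_2d, tgt_len):
    """[batch, src_len] 0/1 mask -> [batch, 1, tgt_len, src_len] additive mask:
    0.0 where attention is allowed, -inf where the key position is padding,
    so masked scores vanish after softmax."""
    return [
        [[[0.0 if keep else NEG_INF for keep in row] for _ in range(tgt_len)]]
        for row in mask_2d
    ]

expand_mask([[1, 1, 0]], tgt_len=2)
# -> [[[[0.0, 0.0, -inf], [0.0, 0.0, -inf]]]]
```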
    • Add early stopping for Bark generation via logits processor (#26675) · e2bffcfa
      Isaac Chung authored
      * add early stopping logits processor
      
      * black formmated
      
      * indent
      
      * follow method signature
      
      * actual logic
      
      * check for None
      
      * address comments on docstrings and method signature
      
      * add unit test under `LogitsProcessorTest` wip
      
      * unit test passing
      
      * black formatted
      
      * condition per sample
      
      * add to BarkModelIntegrationTests
      
      * wip BarkSemanticModelTest
      
      * rename and add to kwargs handling
      
      * not add to BarkSemanticModelTest
      
      * correct logic and assert last outputs tokens different in test
      
      * doc-builder style
      
      * read from kwargs as well
      
      * assert len of with less than that of without
      
      * ruff
      
      * add back seed and test case
      
      * add original impl default suggestion
      
      * doc-builder
      
      * rename and use softmax
      
      * switch back to LogitsProcessor and update docs wording
      
      * camelCase and spelling and saving compute
      
      * assert strictly less than
      
      * assert less than
      
      * expand test_generate_semantic_early_stop instead
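The processor above ("rename and use softmax", "condition per sample") stops generation once the model is confident enough in the end token. A hedged, framework-free sketch of that confidence check (names and the threshold are illustrative, not Bark's actual defaults):

```python
import math

def eos_probability(logits, eos_token_id):
    """Softmax probability assigned to the EOS token (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    return exps[eos_token_id] / sum(exps)

def should_stop_early(logits, eos_token_id, min_eos_p=0.9):
    """Stop a sample's generation once EOS probability clears the threshold."""
    return eos_probability(logits, eos_token_id) >= min_eos_p

should_stop_early([0.0, 0.0, 10.0], eos_token_id=2)  # confident EOS -> True
```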
  14. 25 Oct, 2023 1 commit
  15. 24 Oct, 2023 2 commits
    • Add a default decoder_attention_mask for EncoderDecoderModel during training (#26752) · a0fd3448
      JB (Don) authored
      * Add a default decoder_attention_mask for EncoderDecoderModel during training
      
      Since we are already creating the default decoder_input_ids from the labels, we should also
      create a default decoder_attention_mask to go with it.
      
      * Fix test constant that relied on manual_seed()
      
      The test was changed to use a decoder_attention_mask that ignores padding instead (which is
      the default one created by BERT when attention_mask is None).
      
      * Create the decoder_attention_mask using decoder_input_ids instead of labels
      
      * Fix formatting in test
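As the commit message explains, once `decoder_input_ids` are derived from the labels, the matching mask can be derived from them in turn: attend everywhere except padding. A hedged sketch of that default (illustrative helper, not the library code):

```python
def default_decoder_attention_mask(decoder_input_ids, pad_token_id):
    """Attend (1) at every position except those holding the pad token (0)."""
    return [
        [0 if token == pad_token_id else 1 for token in row]
        for row in decoder_input_ids
    ]

default_decoder_attention_mask([[101, 7, 8, 0, 0]], pad_token_id=0)
# -> [[1, 1, 1, 0, 0]]
```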
    • Device agnostic testing (#25870) · 9da45171
      Alex McKinney authored
      
      
      * adds agnostic decorators and availability fns
      
      * renaming decorators and fixing imports
      
      * updating some representative example tests
      bloom, opt, and reformer for now
      
      * wip device agnostic functions
      
      * lru cache to device checking functions
      
      * adds `TRANSFORMERS_TEST_DEVICE_SPEC`
      if present, imports the target file and updates device to function
      mappings
      
      * comments `TRANSFORMERS_TEST_DEVICE_SPEC` code
      
      * extra checks on device name
      
      * `make style; make quality`
      
      * updates default functions for agnostic calls
      
      * applies suggestions from review
      
      * adds `is_torch_available` guard
      
      * Add spec file to docs, rename function dispatch names to backend_*
      
      * add backend import to docs example for spec file
      
      * change instances of  to
      
      * Move register backend to before device check as per @statelesshz changes
      
      * make style
      
      * make opt test require fp16 to run
      
      ---------
      Co-authored-by: arsalanu <arsalanu@graphcore.ai>
      Co-authored-by: arsalanu <hzji210@gmail.com>
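The `backend_*` helpers described above boil down to a dispatch table: each call looks up the current device name and invokes the registered implementation, falling back to a default when the device is unknown. A hedged sketch of the pattern (table contents and names are illustrative):

```python
# Map device name -> implementation; "default" is the fallback backend.
# A spec file (as in TRANSFORMERS_TEST_DEVICE_SPEC) would extend this table
# with entries for custom accelerators.
BACKEND_MANUAL_SEED = {
    "cuda": lambda seed: f"cuda seeded with {seed}",
    "default": lambda seed: f"cpu seeded with {seed}",
}

def backend_manual_seed(device, seed):
    """Dispatch to the device-specific seeding function."""
    fn = BACKEND_MANUAL_SEED.get(device, BACKEND_MANUAL_SEED["default"])
    return fn(seed)

backend_manual_seed("npu", 0)  # unknown device falls back to the default
```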
  16. 23 Oct, 2023 1 commit