"tests/test_tokenization_realm.py" did not exist on "0ffc8eaf53542092271a208a52e881668e753e72"
  1. 29 Mar, 2023 1 commit
  2. 28 Mar, 2023 1 commit
  3. 27 Mar, 2023 1 commit
    • [WIP] `NLLB-MoE` Adds the MoE model (#22024) · 19ade242
      Arthur authored
      * Initial commit
      
      * update modeling code
      
      * update doc
      
      * add functions necessary
      
      * fix imports
      
      * revert changes
      
      * fixup
      
      * more styling to get going
      
      * remove standalone encoder
      
      * update code
      
      * styling
      
      * fix config and model
      
      * update code and some refactoring
      
      * make more tests pass
      
      * Adding NLLB-200 - MoE - 54.5B for no language left behind
      Fixes #21300
      
      * fix more common tests
      
      * style
      
      * update testing file
      
      * update
      
      * update
      
      * Router2 doc
      
      * update check config with sparse layer
      
      * add dummy router
      
      * update current conversion script
      
      * create on the fly conversion script
      
      * Fixup
      
      * style
      
      * style 2
      
      * fix empty return
      
      * fix return
      
      * Update default config sparse layers
      
      * easier to create sparse layers
      
      * update
      
      * update conversion script
      
      * update modeling
      
      * add to toctree
      
      * styling
      
      * make ruff happy
      
      * update docstring
      
      * update conversion script
      
      * update, will break tests but implementing top2
      
      * update
      
      * local groups are supported here
      
      * Support for local groups is now removed
      
      This is because it has to work with model parallelism that we do not support
      
      * finish simplification
      
      * Fix forward
      
      * style
      
      * fixup
      
      * Update modelling and test, refactoring
      
      * update tests
      
      * remove final layer_norm as it is done in the FF
      
      * routing works! Logits test added
      
      * nit in test
      
      * remove top1router
      
      * style
      
      * make sure sparse layers are tested. Had to change route_tokens a little bit
      
      * add support for unslip models when converting
      
      * fixup
      
      * style
      
      * update tests
      
      * update test
      
      * REFACTOR
      
      * encoder outputs match!
      
      * style
      
      * update testing
      
      * 🎉encoder and decoder logits match 🎉
      
      
      
      * styling
      
      * update tests
      
      * cleanup tests
      
      * fix router test and CIs
      
      * cleanup
      
      * cleanup test styling
      
      * fix tests
      
      * Finally the generation tests match!
      
      * cleanup
      
      * update test
      
      * style testing file
      
      * remove script
      
      * cleanup
      
      * more cleanup
      
      * nits
      
      * update
      
      * NLLB tokenizer is wrong and will be fixed soon
      
      * use LongTensors
      
      * update tests
      
      * revert some small changes
      
      * fix second expert sampling and batch prioritized routing
      
      * update tests
      
      * finish last tests
      
      * make ruff happy
      
      * update
      
      * ruff again
      
      * style
      
      * Update docs/source/en/model_doc/nllb-moe.mdx
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Updates based on review
      
      * style and fix import issue
      
      * nit
      
      * more nits
      
      * cleanup
      
      * styling
      
      * update test_seconde_expert_policy
      
      * fix name
      
      * last nit on the markdown examples
      
      ---------
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      19ade242
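The entry above repeatedly refers to top-2 expert routing (top2, second-expert sampling, batch-prioritized routing). As a reading aid, here is a minimal sketch of the generic top-2 routing step in PyTorch; it is not the `NllbMoe` code from this PR, and the function and tensor names are invented for the example.

```python
import torch
import torch.nn.functional as F


def top2_route(hidden_states: torch.Tensor, router_weights: torch.Tensor):
    """Toy top-2 router: pick the two best experts per token and renormalize.

    hidden_states: (num_tokens, hidden_dim); router_weights: (hidden_dim, num_experts).
    """
    router_logits = hidden_states @ router_weights            # (num_tokens, num_experts)
    router_probs = F.softmax(router_logits, dim=-1)

    # Keep only the two highest-scoring experts for each token.
    top2_probs, top2_experts = router_probs.topk(2, dim=-1)   # both (num_tokens, 2)
    top2_probs = top2_probs / top2_probs.sum(dim=-1, keepdim=True)
    return top2_experts, top2_probs


tokens = torch.randn(6, 8)      # 6 tokens, hidden size 8
router = torch.randn(8, 4)      # 4 experts
experts, weights = top2_route(tokens, router)
print(experts.shape, weights.shape)  # torch.Size([6, 2]) torch.Size([6, 2])
```

Batch-prioritized routing and expert capacity limits, also mentioned in the bullets, sit on top of this selection step and are omitted here.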
  4. 23 Mar, 2023 1 commit
  5. 22 Mar, 2023 2 commits
  6. 21 Mar, 2023 2 commits
  7. 17 Mar, 2023 1 commit
  8. 16 Mar, 2023 2 commits
    • Update tiny model creation script (#22202) · 4c5c0af7
      Yih-Dar authored
      
      
      * Update UNCONVERTIBLE_MODEL_ARCHITECTURES
      
      * Deal with 2 model tester classes in single test file
      
      * Deal with 2 model tester classes in single test file
      
      * Deal with 2 model tester classes in single test file
      
      * make style and quality
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      4c5c0af7
    • LLaMA Implementation (#21955) · 0041be5b
      Jason Phang authored
      
      
      * LLaMA
      
      * sharding and docs
      
      * tweak
      
      * black
      
      * inits
      
      * ruff
      
      * LLAMA_PRETRAINED_CONFIG_ARCHIVE_MAP
      
      * init
      
      * no checkpoint
      
      * docs
      
      * ruff
      
      * type_vocab_size
      
      * tokenizer fixes
      
      * tokenizer fixes
      
      * Update tokenization_llama.py
      
      * Update tokenization_llama.py
      
      * Update configuration_llama.py
      
      * Update modeling_llama.py
      
      * tokenizer add_bos by default
      
      * licenses
      
      * remove decoder
      
      * norms and mlp
      
      * rope overhaul
      
      * tweaks
      
      * black
      
      * mention OPT implementation
      
      * off-by-one naming
      
      * typo
      
      * fix
      
      * tokenization fix and slicing bug
      
      * padding config
      
      * cleanup
      
      * black
      
      * update tests
      
      * undo typo
      
      * fix vocab caching logic
      
      * ruff
      
      * docbuilder
      
      * attn fix from BlackSamorez
      
      * initial feedback
      
      * typo
      
      * docs
      
      * llama case
      
      * llama case
      
      * load checkpoint docs
      
      * comment about tokenizer
      
      * tokenizer defaults
      
      * clear past_key_values if use_cache=False
      
      * last tweaks
      
      * last tweaks
      
      * last tweaks
      
      * last tweaks
      
      ---------
      Co-authored-by: Stella Biderman <stellabiderman@gmail.com>
      0041be5b
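The LLaMA entry above mentions a "rope overhaul". For orientation, below is a minimal rotary-position-embedding (RoPE) sketch; it uses the interleaved-pair convention purely for illustration, is not the code added in this PR, and the function name is made up.

```python
import torch


def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate each channel pair of x (seq_len, num_heads, head_dim) by a position-dependent angle."""
    seq_len, _, head_dim = x.shape
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    angles = torch.outer(torch.arange(seq_len).float(), inv_freq)   # (seq_len, head_dim // 2)
    cos, sin = angles.cos()[:, None, :], angles.sin()[:, None, :]   # broadcast over heads

    x1, x2 = x[..., 0::2], x[..., 1::2]                             # channel pairs
    rotated = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
    return rotated.flatten(-2)


q = torch.randn(16, 4, 64)       # (seq_len, heads, head_dim)
print(apply_rope(q).shape)       # torch.Size([16, 4, 64])
```

Because the rotation angle depends only on position, the dot product of two rotated vectors depends on their relative offset, which is what makes RoPE attractive for attention without learned position embeddings.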
  9. 14 Mar, 2023 1 commit
  10. 13 Mar, 2023 3 commits
    • Add a new script to check model testers' config (#22063) · 54ee56b1
      Yih-Dar authored
      
      
      * Add script
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      54ee56b1
    • add new model of MGP-STR (#21418) · 102b5ff4
      wangpeng authored
      
      
      * add new model of MGP-STR
      
      * fix the check failures
      
      * remove torch and numpy from mgp_tokenization
      
      * remove unused import from modeling_mgp_str
      
      * add test_processing_mgp_str
      
      * rm test_processing_mgp_str.py
      
      * add test_processing_mgp_str
      
      * add test_processing_mgp_str
      
      * add test_processing_mgp_str
      
      * rm test_processing_mgp_str and add softmax outs to model
      
      * rm test_processing_mgp_str and add softmax outs to model
      
      * rewrite the code of mgp-str according to PR suggestions
      
      * rewrite the code of mgp-str according to PR suggestions
      
      * add new model of MGP-STR
      
      * fix the check failures
      
      * remove torch and numpy from mgp_tokenization
      
      * remove unused import from modeling_mgp_str
      
      * add test_processing_mgp_str
      
      * rm test_processing_mgp_str.py
      
      * add test_processing_mgp_str
      
      * add test_processing_mgp_str
      
      * add test_processing_mgp_str
      
      * rm test_processing_mgp_str and add softmax outs to model
      
      * rewrite the code of mgp-str according to PR suggestions
      
      * rewrite the code of mgp-str according to PR suggestions
      
      * remove representation_size from MGPSTRConfig
      
      * reformat configuration_mgp_str.py
      
      * format test_processor_mgp_str.py
      
      * add test for tokenizer and complete model/processor test and model file
      
      * rm unnecessary tuple in modeling_mgp_str
      
      * reduce hidden_size/layers/label_size in test_model
      
      * add integration tests and change MGPSTR to Mgpstr
      
      * add test for logit values
      
      * reformat test model file
      
      ---------
      Co-authored-by: yue kun <yuekun.wp@alibaba-inc.com>
      102b5ff4
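For context on the new scene-text-recognition model added above, a usage sketch follows. The checkpoint name, the placeholder image URL, and the decode call mirror the model documentation added around this PR as best remembered; treat them as assumptions and check the current docs.

```python
import requests
import torch
from PIL import Image
from transformers import MgpstrForSceneTextRecognition, MgpstrProcessor

checkpoint = "alibaba-damo/mgp-str-base"   # assumed checkpoint name
processor = MgpstrProcessor.from_pretrained(checkpoint)
model = MgpstrForSceneTextRecognition.from_pretrained(checkpoint)

# Any tightly cropped word image works; the URL is only a placeholder.
url = "https://i.postimg.cc/ZKwLg2Gw/367-14.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The processor fuses the character / BPE / WordPiece heads into a final string.
print(processor.batch_decode(outputs.logits)["generated_text"])
```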
    • Add AutoModelForZeroShotImageClassification (#22087) · 32e3466d
      Alara Dirik authored
      Adds AutoModelForZeroShotImageClassification to transformers
      32e3466d
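The new auto class resolves a checkpoint to whichever architecture backs zero-shot image classification for it (CLIP-style models, for example). A hedged usage sketch, with an example checkpoint, image URL, and labels chosen purely for illustration:

```python
import requests
import torch
from PIL import Image
from transformers import AutoModelForZeroShotImageClassification, AutoProcessor

checkpoint = "openai/clip-vit-base-patch32"   # example checkpoint
model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint)
processor = AutoProcessor.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["two cats", "a dog", "an empty room"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# For CLIP-style models, the image-text similarity lives in logits_per_image.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```

The same task is also exposed through `pipeline("zero-shot-image-classification", model=checkpoint)`.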
  11. 09 Mar, 2023 2 commits
  12. 08 Mar, 2023 1 commit
  13. 07 Mar, 2023 3 commits
    • Update tiny model creation script and some other files (#22006) · b338414e
      Yih-Dar authored
      
      
      * Update 1
      
      * Update 2
      
      * Update 3
      
      * Update 4
      
      * Update 5
      
      * Update 6
      
      * Update 7
      
      * Update 8
      
      * Update 9
      
      * Update 10
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      b338414e
    • [Time-Series] informer model (#21099) · 8abe4930
      Eli Simhayev authored
      * added informer to gitignore
      
      * added informer to gitignore
      
      * WIP informer2020
      
      * added checking that instantiate works
      
      * added config using gluonTS by kashif
      
      * WIP config
      
      * adding InformerConfig. Need to remove FeatureEmbedder
      
      * done InformerConfig, but need to change the names
      
      * Done informer model init. working on enc-dec
      
      * added things to address, after reading again enc-dec in the paper
      
      * done modeling - checking initialization work
      
      * added informer to gitignore
      
      * WIP informer2020
      
      * added checking that instantiate works
      
      * added config using gluonTS by kashif
      
      * WIP config
      
      * adding InformerConfig. Need to remove FeatureEmbedder
      
      * done InformerConfig, but need to change the names
      
      * Done informer model init. working on enc-dec
      
      * added things to address, after reading again enc-dec in the paper
      
      * done modeling - checking initialization work
      
      * moved enc-dec init to InformerEncoder/Decoder init
      
      * added 'init_std' to config, now model init works!
      
      * WIP conversion script, and added code sources
      
      * WIP conversion script: loading original informer pth works
      
      * WIP conversion script: change defaults in the config
      
      * WIP conversion script: supporting Informer input embedding
      
      * WIP conversion script: added parameters for the informer embed
      
      * WIP conversion script: change dim_feedforward=2048
      
      * WIP conversion script: remove unused args for loading checkpoint
      
      * just cleaning up
      
      * DataEmbedding removed, after thinking with Kashif
      
      * working on forward pass
      
      * WIP forward pass: trying to establish working batch for forward pass
      
      * cleaning and finalizing
      
      * adding HF names and docs
      
      * init after cleaning works
      
      * WIP in tests
      
      * added docs for the informer specific args
      
      * fix style
      
      * undo change
      
      * cleaning informer, now need to work only enc-dec
      
      * initial enc-dec classes
      
      * added encoder and decoder
      
      * added todo
      
      * add todos for conv_layers
      
      * added decoder docs from vanilla
      
      * added encoder docs from vanilla
      
      * remove encoder decoder from the original informer
      
      * removed AttentionLayer from the original paper
      
      * removed TriangularCausalMask, same as decoder_attention_mask
      
      * initial sparse attention
      
      * use conv_layers
      
      * fixed test_config test
      
      * fix parentheses when iterating zip(layers, conv_layers)
      
      * error found in prob attention, added sizes as comments
      
      * fix sizes
      
      * added proposal for q_reduce indexing, and remove unused
      
      * WIP ProbMask, and changed factor=2 for testing
      
      * remove unused libs for this PR for creating the env
      
      * fix checking the attn_weights.size() after bmm
      
      * Q_reduce: changed from torch.gather to simple slicing
      
      * WIP calculate final attn_output
      
      * finish adding v_aggregated, attn_output ready
      
      * changed tgt_len to u in attention_mask, need to fix the size error
      
      * comment attention_mask for encoder, and fix if cond for v_agg
      
      * added ProbMask support (wip), removed old original code
      
      * finished ProbMask 😃
      
      
      
      * Revert "remove unused libs for this PR for creating the env"
      
      This reverts commit 11a081e09e92771e51a5d2758d53a9afb59547f0.
      
      * fixes
      
      * make style
      
      * fix initial tests
      
      * fix more tests
      
      * dry
      
      * make style
      
      * remove unused files
      
      * style
      
      * added integration tests
      
      * fix num_static_real_features
      
      * fix header
      
      * remove unused function
      
      * fix example
      
      * fix docs
      
      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/informer/modeling_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * fixes for reviewer
      
      * use prediction_length from model
      
      * fix style
      
      * fixed informer.mdx
      
      * added to index
      
      * updated readme
      
      * undo
      
      * make fix-copies
      
      * typo
      
      * fix copy
      
      * added Informer to toctree
      
      * in order
      
      * fixed comments
      
      * remove unneeded new lines in docs
      
      * make static real and cat optional
      
      * fix use of distil conv layers
      
      * fixed integration test
      
      * added checkpoint for convlayer
      
      * make fix-copies
      
      * updated from time series model
      
      * make fix-copies
      
      * copy decoder
      
      * fix unit tests
      
      * updated scaling config
      
      * fix integration tests
      
      * IGNORE_NON_TESTED
      
      * IGNORE_NON_AUTO_CONFIGURED
      
      * IGNORE_NON_AUTO_CONFIGURED
      
      * updated check configs
      
      * fix formatting
      
      * undo change from time series
      
      * prediction_length should not be None
      
      * align with the blog: prettify ProbSparse and change attention_factor to sampling_factor
      
      * make style
      
      * make fix-copies
      
      * niels CR: update contributed by
      
      * niels CR: update configuration_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * niels CR: update kashif -> huggingface
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * niels CR: `sampling_factor` only relevant when `attention_type`=prob
      
      * make style
      
      * fixed U_part: added multiplication by `L_Q`
      
      * fixed bug: remove `is not None` from `if config.distil`
      
      * fixed test: `decoder_seq_length` to `encoder_seq_length` in cross_attentions check
      
      * fix integration tests
      
      * updated model hub
      
      * do not shift as in training
      
      * undo
      
      * fix make-copies
      
      * make fix-copies
      
      * added `if prediction_length is None`
      
      * changed `ProbSparseAttention` to `InformerProbSparseAttention`
      
      * changed `V_sum` -> `v_mean_dim_time`
      
      * changed `ConvLayer` to `InformerConvLayer` and fixed `super()`
      
      * TimeSeriesTransformer->Informer in decoder's Copied from
      
      * more descriptive in ProbSparse
      
      * make style
      
      * fix copied from
      
      * Revert "added `if prediction_length is None`"
      
      This reverts commit b4cbddfa05e3bd739b79569cd3c3b89e316f2451.
      
      * fixed indent
      
      * use InformerSinusoidalPositionalEmbedding
      
      * make fix-style
      
      * fix from #21860
      
      * fix name
      
      * make fix-copies
      
      * use time series utils
      
      * fix dec num_heads
      
      * docstring
      
      * added time series util doc
      
      * _import_structure
      
      * formatting
      
      * changes from review
      
      * make style
      
      * fix docs
      
      * fix doc
      
      * removed NegativeLogLikelihood
      
      ---------
      Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      8abe4930
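Several bullets above (sampling_factor, U_part, L_Q, Q_reduce) refer to the ProbSparse query-selection step of Informer. The sketch below illustrates that step only, in simplified form; it is not the InformerProbSparseAttention code from this PR, the names are invented, and log1p is used instead of log merely to avoid edge cases in the toy setting.

```python
import math
import torch


def probsparse_select_queries(q: torch.Tensor, k: torch.Tensor, sampling_factor: int = 5):
    """Pick the 'active' queries that ProbSparse attention evaluates in full.

    q, k: (batch, heads, seq_len, head_dim). Returns indices of the top-u queries,
    ranked by the sparsity measurement M(q, K) = max(qK^T) - mean(qK^T),
    estimated on a random subset of keys.
    """
    l_q, l_k = q.shape[2], k.shape[2]

    # Number of sampled keys and of kept queries (roughly c * ln L, clamped to the length).
    u_part = min(int(sampling_factor * math.ceil(math.log1p(l_k))), l_k)
    u = min(int(sampling_factor * math.ceil(math.log1p(l_q))), l_q)

    # Score every query against a random sample of keys only.
    sampled = torch.randint(0, l_k, (u_part,))
    qk_sample = torch.einsum("bhqd,bhkd->bhqk", q, k[:, :, sampled, :])

    # Sparsity measurement: queries with peaked score distributions get a large M.
    m = qk_sample.max(dim=-1).values - qk_sample.mean(dim=-1)
    return m.topk(u, dim=-1).indices            # (batch, heads, u)


q = torch.randn(2, 4, 64, 32)
k = torch.randn(2, 4, 64, 32)
print(probsparse_select_queries(q, k).shape)    # torch.Size([2, 4, 25])
```

Only the selected queries attend to all keys; the remaining queries fall back to a cheap approximation, which is where the sub-quadratic cost comes from.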
    • Update `notification_service.py` (#21992) · 99c5c607
      Yih-Dar authored
      
      
      * better check
      
      * better check
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      99c5c607
  14. 03 Mar, 2023 2 commits
  15. 02 Mar, 2023 3 commits
  16. 01 Mar, 2023 3 commits
  17. 28 Feb, 2023 2 commits
    • 🔥Rework pipeline testing by removing `PipelineTestCaseMeta` 🚀 (#21516) · 871c31a6
      Yih-Dar authored
      
      
      * Add PipelineTesterMixin
      
      * remove class PipelineTestCaseMeta
      
      * move validate_test_components
      
      * Add for ViT
      
      * Add to SPECIAL_MODULE_TO_TEST_MAP
      
      * style and quality
      
      * Add feature-extraction
      
      * update
      
      * raise instead of skip
      
      * add tiny_model_summary.json
      
      * more explicit
      
      * skip tasks not in mapping
      
      * add availability check
      
      * Add Copyright
      
      * A way to disable irrelevant tests
      
      * update with main
      
      * remove disable_irrelevant_tests
      
      * skip tests
      
      * better skip message
      
      * better skip message
      
      * Add all pipeline task tests
      
      * revert
      
      * Import PipelineTesterMixin
      
      * subclass test classes with PipelineTesterMixin
      
      * Add pipeline_model_mapping
      
      * Fix import after adding pipeline_model_mapping
      
      * Fix style and quality after adding pipeline_model_mapping
      
      * Fix one more import after adding pipeline_model_mapping
      
      * Fix style and quality after adding pipeline_model_mapping
      
      * Fix test issues
      
      * Fix import requirements
      
      * Fix mapping for MobileViTModelTest
      
      * Update
      
      * Better skip message
      
      * pipeline_model_mapping could not be None
      
      * Remove some PipelineTesterMixin
      
      * Fix typo
      
      * revert tests_fetcher.py
      
      * update
      
      * rename
      
      * revert
      
      * Remove PipelineTestCaseMeta from ZeroShotAudioClassificationPipelineTests
      
      * style and quality
      
      * test fetcher for all pipeline/model tests
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      871c31a6
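The rework replaces the PipelineTestCaseMeta metaclass with an explicit pipeline_model_mapping declared on each model test class, which a PipelineTesterMixin consumes. The sketch below only illustrates that shape; the mixin here is a stand-in, and the real classes live in the repository's tests package, not in the installed library.

```python
import unittest

from transformers import ViTForImageClassification, ViTModel


class PipelineTesterMixin:
    """Stand-in: the real mixin reads `pipeline_model_mapping` and generates one
    pipeline smoke test per declared task, skipping tasks the model does not support."""

    pipeline_model_mapping = None


class ViTModelTest(PipelineTesterMixin, unittest.TestCase):
    # Task name -> model class this test file covers.
    pipeline_model_mapping = {
        "feature-extraction": ViTModel,
        "image-classification": ViTForImageClassification,
    }

    def test_mapping_is_declared(self):
        self.assertIn("image-classification", self.pipeline_model_mapping)


if __name__ == "__main__":
    unittest.main()
```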
    • Make Slack CI reporting stronger (#21823) · aab895c3
      Yih-Dar authored
      
      
      * Use token
      
      * Avoid failure
      
      * better error
      
      * Fix
      
      * fix style
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      aab895c3
  18. 22 Feb, 2023 2 commits
  19. 21 Feb, 2023 1 commit
  20. 20 Feb, 2023 2 commits
    • add GPTSAN model (reopen) (#21291) · f56174ac
      tanreinama authored
      * add GPTSAN-Japanese
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN (update for review)
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * fix typo in comment text
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * fix document and comments
      
      * fix class name GPTSAN->GPTSan
      
      * fix import and test for tokenizer
      f56174ac
    • add flax whisper implementation (#20479) · 2840272c
      Andy Ehrenberg authored
      
      
      * add flax whisper implementation
      
      * revert change to setup
      
      * remove unused imports
      
      * revert generation changes
      
      * flax whisper docs
      
      * docs
      
      * import order
      
      * import sorting
      
      * isort
      
      * add dummy objects
      
      * doc formatting
      
      * formatting
      
      * remove trailing whitespaces
      
      * fix flax whisper docs
      
      * add generation logic to unlock flax whisper
      
      * remove scans
      
      * give credits to Flax Bart implementation
      
      * remove unused imports
      
      * add license
      
      * remove assert
      
      * more credits to Bart
      
      * fix style
      
      * formatting
      
      * support left padding
      
      * add flax whisper generation test
      
      * remove copied from comments whenever not a full copy
      
      * fix docstrings for logits processors
      
      * revert change to FlaxForceTokensLogitsProcessor
      
      * revert doc changes
      
      * improve generation docs
      
      * reorganize
      
      * formatting
      
      * cleanup docs
      
      * add tests
      
      * handle empty list case
      
      * fix forced decoder ids in flax tests
      
      * add flax whisper to inits
      
      * update dummy objects
      
      * docs for FlaxAutoModelForSpeechSeq2Seq
      
      * fix decoder_position_ids computation in pretrained model decode/__call__ fns
      
      * add Copied from statements as necessary
      
      * compute position_ids only in __call__ and decode methods of pretrained model subclasses
      
      * improve readability of compute positional embeddings
      
      * check dimensionality of input_features instead of hidden_states
      
      * copied from statement for init_cache
      
      * formatting
      
      * fix copies
      
      * fix copies
      
      * pass attention mask to encoder layers
      
      * fix decoder module outputs
      
      * set dtype
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * smaller flax model for whisper test
      
      * Update src/transformers/generation/flax_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/whisper/modeling_flax_whisper.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update tests/models/whisper/test_modeling_flax_whisper.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * cleanup
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/whisper/modeling_flax_whisper.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * bias cleanup
      
      * doc fix
      
      * align style for force tokens processor
      
      * readability
      
      * fix input shape in tests
      
      * revert FlaxGenerationMixin docstring
      
      * formatting
      
      * fix tests
      
      * fix imports
      
      * consistent encoder hidden states
      
      * consistent hidden states
      
      * input shapes
      
      * typo
      
      * partial class trick
      
      * partial class for input shape
      
      * base_class with correct input shape
      
      * partial base classes
      
      * match by name
      
      * set main_input_name
      
      * compare on names
      
      * formatting
      
      * remove unused import
      
      * safer position ids computation
      
      * safer position id computation
      
      * Update src/transformers/models/whisper/modeling_flax_whisper.py
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * Update src/transformers/models/whisper/modeling_flax_whisper.py
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * remove identical inherited tests
      
      * fix prompt ids in tests
      
      * use generation config
      
      * use jnp array
      
      * better var names
      
      * more explicit bias use
      
      * import transformers
      
      * formatting
      
      * test formatting
      
      * remove unused imports
      
      * remove unused imports
      
      * formatting
      
      * isort
      
      * docs
      
      * fix ln orders for encoder hidden states
      
      * whisper unique generation stuff
      
      * flake
      
      * use finfo for attention bias
      
      * docs
      
      * Update src/transformers/generation/flax_utils.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * docs
      
      * add timestamp flax test
      
      * jit for timestamps
      
      * formatting
      
      * clean up timestamps processor
      
      * formatting
      
      * remove if_true
      
      * cleanup
      
      ---------
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      2840272c
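As a quick orientation for the new Flax port, a minimal transcription sketch follows. The checkpoint is an example, from_pt=True converts the PyTorch weights (requires torch installed) and can be dropped if Flax weights are published for the checkpoint, and the silent array merely stands in for real 16 kHz audio.

```python
import numpy as np
from transformers import FlaxWhisperForConditionalGeneration, WhisperProcessor

checkpoint = "openai/whisper-tiny"          # example checkpoint
processor = WhisperProcessor.from_pretrained(checkpoint)
model = FlaxWhisperForConditionalGeneration.from_pretrained(checkpoint, from_pt=True)

audio = np.zeros(32000, dtype=np.float32)   # 2 s of silence at 16 kHz, placeholder input
inputs = processor(audio, sampling_rate=16000, return_tensors="np")

generated = model.generate(inputs.input_features)
print(processor.batch_decode(generated.sequences, skip_special_tokens=True))
```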
  21. 16 Feb, 2023 2 commits
    • [CLAP] Add CLAP to the library (#21370) · c236a621
      Arthur authored
      
      
      * add model like clip
      
      * update
      
      * text model ok
      
      * clap text works
      
      * some refactor
      
      - `CLAPVision` to `CLAPAudio`
      - refactor kwargs of audio modules
      
      * more refactor
      
      * more refactor
      
      * more refactor
      
      * correct fusion
      
      * more refactor
      
      * new modules
      
      * add basic processor
      
      * fixup
      
      * remove whisper copied from
      
      * audio logits match
      
      * add doc
      
      * correct filters mel and add maxlength
      
      * style
      
      * few fixes
      
      * forward passes
      
      * fixup
      
      * fixup
      
      * some clean up
      
      * remove mels from the dictionary
      
      * pad after the repeat
      
      * update padding when smaller
      
      * fix padding
      
      * style
      
      * use swin patch merging
      
      * use copied from swin
      
      * processor with any tokenizer
      
      * more copied from
      
      * some clean up
      
      * more refactor
      
      * fix mel when rand_trunc
      
      * style
      
      * remove unused imports
      
      * update processing
      
      * remove image processing tests
      
      * add testing file
      
      * fix modeling issues
      
      * replace with `is_longer`
      
      * clap in serialization
      
      * more refactor
      
      * `make fixup`
      
      * make fixup
      
      * fix feature extractor
      
      * update test feature extractor
      
      * `make fixup`
      
      * clean up config
      
      * more clean up
      
      * more cleanup
      
      * update tests
      
      * refactor tests and inits
      
      * remove CLAP vision config
      
      * remove CLAP from image processing auto and dummy vision objects
      
      * update inits
      
      * style
      
      * re order classes in modeling clap
      
      * Use roberta tokenizer as the other weights are not open sourced
      
      * small cleanup
      
      * remove tokenization CLAP
      
      * processor tokenizer is roberta
      
      * update feature extraction doc
      
      * remove vclap from model zero shot
      
      * update f_min and f_max to frequency_xx
      
      * some changes
      
      - fix modeling keys
      - add `is_longer` in the forward pass
      - make fixup
      
      * make fixup
      
      * consistent behavior between rand_crop and fusion
      
      * add numpy resize and bilinear and documentation
      
      * move resizing to image utils
      
      * clean feature extraction
      
      * import resize from correct file
      
      * resize in image transforms
      
      * update
      
      * style
      
      * style
      
      * nit
      
      * remove unused arguments from the feature extractor
      
      * style
      
      * few fixes + make fixup
      
      * oops
      
      * fix more tests
      
      * add zero shot audio classification pipeline
      
      * update zeroshot classification pipeline
      
      * fixup
      
      * fix copies
      
      * all CI tests pass
      
      * make fixup + fix docs
      
      * fix docs
      
      * fix docs
      
      * update tests pipeline
      
      * update zero shot pipeline
      
      * update feature extraction clap
      
      * update tokenization auto
      
      * use nested simplify
      
      * update pipeline tests
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * split in two lines
      
      * fixes
      
      * refactor
      
      * clean up
      
      * add integration tests
      
      * update config docstring
      
      * style
      
      * update processor
      
      * fix processor test
      
      * fix feat extractor tests
      
      * update docs
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix readmes
      
      * fix tips
      
      * Update src/transformers/models/auto/configuration_auto.py
      
      * update doc and remove todo -> properly explained
      
      * fix idx and typo
      
      * typo
      
      * cleanup config
      
      * cleanup tests, styles and doc
      
      * ignore docstyle on image transform
      
      * add conversion script
      
      * remove the `clap` index in favor of `CLAP`
      
      * update __init
      
      * nits
      
      * Update src/transformers/pipelines/__init__.py
      
      * fix bug
      
      * clarify config
      
      * fix copy
      
      * fix init
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * fix model output
      
      * fix comment
      
      * make fixup
      
      * make fixup
      
      * rename to `Clap`
      
      * replace to `Clap`
      
      * replace to `Clap`
      
      * repo consistency
      
      * again repo-consistency
      
      * make fixup
      
      * Apply suggestions from code review
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * add config
      
      * changes
      
      * update conversion
      
      * Apply suggestions from code review
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * remove unused function
      
      * update based on code reviews
      
      * style
      
      * more comments
      
      * cleanup
      
      * clean up
      
      * style
      
      * apply suggestions
      
      * Empty commit
      
      * pipeline will be added in a different PR
      
      * update calls to audio utils functions
      
      * update pipeline init
      
      * style
      
      * style
      
      * styling again
      
      * use pad
      
      * fix repo-consistency
      
      * update utils and add doc for audio utils
      
      * clean up resize by using torch. update inits accordingly
      
      * style
      
      * CLAP's tokenizer is RoBERTa
      
      * add audio utils to internal toctree
      
      * update toctree
      
      * style
      
      * update documentation and normalize naming across audio utils and feature extraction clap
      
      * style
      
      * clean up
      
      * update doc and typos
      
      * fix doctest
      
      * update modeling code, got rid of a lot of reshaping
      
      * style on added doc audio utils
      
      * update modeling clap
      
      * style
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * docstring variables with CLAP
      
      * rename key
      
      * update modeling CLAP
      
      * update audio utils docstring
      
      * update processing clap
      
      * fix readmes
      
      * fix toctree
      
      * update configuration clap
      
      * fix init
      
      * make fixup
      
      * fix
      
      * fix
      
      * update naming
      
      * update
      
      * update checkpoint path
      
      * Apply suggestions from code review
      
      * Major refactoring
      
      * Update src/transformers/models/clap/configuration_clap.py
      
      * merge
      
      ---------
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      c236a621
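The CLAP entry above centres on contrastive language-audio pretraining, i.e. scoring an audio clip against free-form text. A hedged zero-shot sketch is below; the checkpoint name, the audios= keyword, and the logits_per_audio field follow the documentation added in this PR as best remembered, and the random waveform only stands in for a real 48 kHz recording.

```python
import numpy as np
import torch
from transformers import ClapModel, ClapProcessor

checkpoint = "laion/clap-htsat-unfused"     # assumed checkpoint name
model = ClapModel.from_pretrained(checkpoint)
processor = ClapProcessor.from_pretrained(checkpoint)

audio = np.random.randn(48000).astype(np.float32)   # placeholder 1 s waveform
texts = ["a dog barking", "a vacuum cleaner", "silence"]

inputs = processor(text=texts, audios=audio, sampling_rate=48000,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Audio-text similarity, normalized over the candidate texts.
probs = outputs.logits_per_audio.softmax(dim=-1)
print(dict(zip(texts, probs[0].tolist())))
```

This text-audio scoring is also what powers the zero-shot audio classification pipeline mentioned in the bullets.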
    • refactor: Make direct_transformers_import util (#21652) · 0f96c26d
      Connor Henderson authored
      * refactor: Make direct_import util
      
      * edit direct import fn
      
      * add docstring
      
      * make import function specific to transformers only
      
      * edit doc string
      0f96c26d
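The utility's point is to import the transformers package straight from a source checkout instead of whatever is pip-installed, which is useful for repo consistency checks. Below is a generic sketch of that idea using importlib; the helper name and the example path are illustrative, not the exact function added in this PR.

```python
import importlib.util
import os
from types import ModuleType


def direct_import(package_dir: str, name: str = "transformers") -> ModuleType:
    """Load a package directly from its source directory, bypassing site-packages."""
    init_path = os.path.join(package_dir, "__init__.py")
    spec = importlib.util.spec_from_file_location(
        name, init_path, submodule_search_locations=[package_dir]
    )
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


# Example (path is hypothetical): import the repo checkout rather than the pip install.
# transformers = direct_import("/path/to/transformers/src/transformers")
# print(transformers.__file__)
```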
  22. 15 Feb, 2023 2 commits