1. 16 Jan, 2023 2 commits
    • Alara Dirik
    • Add UperNet (#20648) · 4ed89d48
      NielsRogge authored
      
      
      * First draft
      
      * More improvements
      
      * Add convnext backbone
      
      * Add conversion script
      
      * Add more improvements
      
      * Comment out to_dict
      
      * Add to_dict method
      
      * Add default config
      
      * Fix config
      
      * Fix backbone
      
      * Fix backbone some more
      
      * Add docs, auto mapping, tests
      
      * Fix some tests
      
      * Fix more tests
      
      * Fix more tests
      
      * Add conversion script
      
      * Improve conversion script
      
      * Add support for getting reshaped undownsampled hidden states
      
      * Fix forward pass
      
      * Add print statements
      
      * Comment out set_shift_and_window_size
      
      * More improvements
      
      * Correct downsampling layers conversion
      
      * Fix style
      
      * First draft
      
      * Fix conversion script
      
      * Remove config attribute
      
      * Fix more tests
      
      * Update READMEs
      
      * Update ConvNextBackbone
      
      * Fix ConvNext tests
      
      * Align ConvNext with Swin
      
      * Remove files
      
      * Fix index
      
      * Improve docs
      
      * Add output_attentions to model forward
      
      * Add backbone mixin, improve tests
      
      * More improvements
      
      * Update init_weights
      
      * Fix interpolation of logits
      
      * Add UperNetImageProcessor
      
      * Improve image processor
      
      * Fix image processor
      
      * Remove print statements
      
      * Remove script
      
      * Update import
      
      * Add image processor tests
      
      * Remove print statements
      
      * Fix test
      
      * Add integration test
      
      * Add convnext integration test
      
      * Update docstring
      
      * Fix README
      
      * Simplify config
      
      * Apply suggestions
      
      * Improve docs
      
      * Rename class
      
      * Fix test_initialization
      
      * Fix import
      
      * Address review
      
      * Fix config
      
      * Convert all checkpoints
      
      * Fix default backbone
      
      * Use same processor as SegFormer
      
      * Apply suggestions
      
      * Fix init_weights, update conversion scripts
      
      * Improve config
      
      * Use Auto API instead of creating a new image processor
      
      * Fix docs
      
      * Add doctests
      
      * Remove ResNetConfig dependency
      
      * Add always_partition argument
      
      * Fix rebase
      
      * Improve docs
      
      * Convert checkpoints
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MBP.localdomain>
      4ed89d48
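      For context, a minimal usage sketch of the API this PR adds, following the commits above (Auto API for the image processor, logits interpolated to the input size). The checkpoint id is an assumption; the log only says all checkpoints were converted.

        from PIL import Image
        import requests
        import torch
        from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

        ckpt = "openmmlab/upernet-convnext-tiny"  # assumed checkpoint id
        processor = AutoImageProcessor.from_pretrained(ckpt)
        model = UperNetForSemanticSegmentation.from_pretrained(ckpt)

        url = "http://images.cocodataset.org/val2017/000000039769.jpg"
        image = Image.open(requests.get(url, stream=True).raw)

        inputs = processor(images=image, return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)
        # Logits come back interpolated to the input resolution.
        print(outputs.logits.shape)  # (batch, num_labels, height, width)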
  2. 04 Jan, 2023 2 commits
  3. 03 Jan, 2023 1 commit
    • Add GIT (GenerativeImage2Text) (#20295) · 9c6f7485
      NielsRogge authored
      
      
      * First draft
      
      * Make model instantiation work
      
      * Fix copied from statement
      
      * More fixes
      
      * Add correct output head
      
      * Improve configuration
      
      * Add conversion script
      
      * Improve conversion script
      
      * Remove token_type_ids
      
      * Fix conversion of projection layers
      
      * Convert all weights
      
      * Use cats image
      
      * Make logits match
      
      * Generate caption on cats image
      
      * Add GITProcessor
      
      * Update conversion script
      
      * Add support for more checkpoints
      
      * Fix conversion script
      
      * Add initial tests
      
      * Remove cross-attention
      
      * More improvements
      
      * Remove is_decoder
      
      * Improve model tests
      
      * Improve tests
      
      * Improve model outputs
      
      * Fix model outputs equivalence
      
      * Fix more tests
      
      * Remove unused code
      
      * Use generate to generate text, no use of cache for now
      
      * Use generate more appropriately
      
      * Fix config tests
      
      * Fix style
      
      * Add support for use_cache
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Fix style
      
      * Fix GIT vision encoder
      
      * Update README
      
      * Fix integration test
      
      * Set bos and eos token ids
      
      * Improve docs
      
      * Improve code
      
      * Add support for provided attention_mask
      
      * Add copied from statement
      
      * Fix gradient checkpointing test
      
      * Set model_input_names
      
      * Investigate model_input_names
      
      * Remove script
      
      * Fix model inputs
      
      * Fix docstring
      
      * Rename GIT to Git
      
      * Support more models
      
      * Add support for textvqa model
      
      * Add video support
      
      * Extend conversion script for video
      
      * Add support for large variant
      
      * Add support for more models
      
      * Fix config archive map
      
      * Update integration test
      
      * Fix README
      
      * Fix CLIP mean and std
      
      * Update processor
      
      * Fix use_cache for video, thanks @gante
      
      * Remove print statements
      
      * Remove assertion
      
      * Add processor tests
      
      * Fix model_input_names
      
      * Use Auto API for processor
      
      * Fix processor tests
      
      * Fix integration test
      
      * Fix pipeline test
      
      * Make tests faster
      
      * Update conversion script
      
      * Update conversion script
      
      * Convert more checkpoints
      
      * Update conversion script
      
      * Fix typo
      
      * Update docstrings
      
      * Improve code snippets
      
      * Fix doc tests
      
      * Add more code examples
      
      * Fix doc tests
      
      * Add integration tests
      
      * Fix unused variable
      
      * revert
      
      * Add GIT to Japanese README
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      9c6f7485
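      A minimal captioning sketch with the classes this PR wires up (AutoProcessor per "Use Auto API for processor", text produced via generate). The checkpoint id is an assumption based on the converted variants mentioned above.

        from PIL import Image
        import requests
        from transformers import AutoModelForCausalLM, AutoProcessor

        ckpt = "microsoft/git-base"  # assumed checkpoint id
        processor = AutoProcessor.from_pretrained(ckpt)
        model = AutoModelForCausalLM.from_pretrained(ckpt)

        url = "http://images.cocodataset.org/val2017/000000039769.jpg"
        image = Image.open(requests.get(url, stream=True).raw)
        pixel_values = processor(images=image, return_tensors="pt").pixel_values

        # "Use generate to generate text": caption the image.
        generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
        print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])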
  4. 21 Dec, 2022 1 commit
  5. 19 Dec, 2022 1 commit
  6. 16 Dec, 2022 1 commit
    • Add Swin2SR (#19784) · 26dd041c
      NielsRogge authored
      
      
      * First draft
      
      * Add more improvements
      
      * Improve forward pass
      
      * Fix layernorm
      
      * Add upscaler
      
      * More improvements
      
      * More improvements
      
      * More improvements
      
      * Improve conversion script
      
      * Add preprocessing
      
      * Make output match original implementation
      
      * Add additional attributes
      
      * Add support for more models
      
      * Support more models
      
      * Add support for real world sr
      
      * Add initial Swin2SRFeatureExtractor
      
      * Add ImageSuperResolutionOutput
      
      * Make more tests pass
      
      * Use BaseModelOutput
      
      * Fix one more test
      
      * Fix more tests
      
      * Fix another test
      
      * Fix all tests
      
      * Rename to Swin2SRImageProcessor
      
      * Fix toctree
      
      * Fix toctree
      
      * Fix rebase
      
      * Improve Swin2SRImageProcessor
      
      * Remove feature extractor file
      
      * Improve model
      
      * Improve conversion script
      
      * Fix integration test
      
      * Fix init
      
      * Fix conversion script
      
      * Address comments
      
      * Improve upsampler
      
      * Add NearestConvUpsampler
      
      * Improve pixel shuffle upsampler
      
      * Improve auxiliary upsampler
      
      * Improve conversion script
      
      * Rename conv_last to final_convolution
      
      * Fix rebase
      
      * Improve upsample module
      
      * Add padding to image processor
      
      * Fix bug
      
      * Update padding
      
      * Remove print statement and fix integration test
      
      * Improve docs
      
      * Add image processor tests
      
      * Convert all checkpoints, fix tests
      
      * Remove print statements
      
      * Fix import
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      26dd041c
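      A minimal super-resolution sketch with the classes named above (Swin2SRImageProcessor, ImageSuperResolutionOutput). The checkpoint id for the classical x2 variant is an assumption.

        from PIL import Image
        import requests
        import torch
        from transformers import Swin2SRForImageSuperResolution, Swin2SRImageProcessor

        ckpt = "caidas/swin2SR-classical-sr-x2-64"  # assumed checkpoint id
        processor = Swin2SRImageProcessor.from_pretrained(ckpt)
        model = Swin2SRForImageSuperResolution.from_pretrained(ckpt)

        url = "http://images.cocodataset.org/val2017/000000039769.jpg"
        image = Image.open(requests.get(url, stream=True).raw)

        # The processor pads the image (see "Add padding to image processor").
        inputs = processor(image, return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)
        print(outputs.reconstruction.shape)  # upscaled image tensor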
  7. 12 Dec, 2022 1 commit
    • Add gpt-sw3 model to transformers (#20209) · 5f94855d
      Ariel Ekgren authored
      
      
      * Add templates for gpt-sw3
      
      * Add templates for gpt-sw3
      
      * Added sentencepiece tokenizer
      
      * intermediate commit with many changes
      
      * fixed conflicts
      
      * Init commit for tokenization port
      
      * Tokenization progress
      
      * Remove fast tokenizer
      
      * Clean up and rename spm.model -> spiece.model
      
      * Remove TF -> PT conversion script template, Clean up Megatron -> PT script
      
      * Optimize encode & decode performance
      
      * added new attention
      
      * added new attention
      
      * attention for gpt-sw3 working
      
      * attention good
      
      * Cache is now working
      
      * fixed attention mask so that it works with causal attention
      
      * fixed baddbmm bug for cpu and caching
      
      * updated config with correct parameters
      
      * Refactor and leave optimizations as separate functions to avoid breaking expected functionality
      
      * Fix special tokens mapping for both tokenizers
      
      * cleaning up of code and comments
      
      * HF compatible attention outputs
      
      * Tokenizer now passing tests, add documentation
      
      * Update documentation
      
      * reverted back to base implementation after checking that it is identical to pretrained model
      
      * updated gpt-sw3 config
      
      * updated conversion script
      
      * aligned parameters with gpt-sw3 config
      
      * changed default scale_attn_by_inverse_layer_idx to true
      
      * removed flag from conversion script
      
      * added temporary model path
      
      * reverted back to functioning convert script
      
      * small changes to default config
      
      * updated tests for gpt-sw3
      
      * make style, make quality, minor cleanup
      
      * Change local paths to testing online repository
      
      * Change name: GptSw3 -> GPTSw3
      
      * Remove GPTSw3TokenizerFast references
      
      * Use official model repository and add more model sizes
      
      * Added reference to 6.7b model
      
      * Add GPTSw3DoubleHeadsModel to IGNORE_NON_AUTO_CONFIGURED, like GPT2DoubleHeadsModel
      
      * Remove pointers to non-existing TFGPTSw3
      
      * Add GPTSw3 to docs/_toctree.yml
      
      * Remove TF artifacts from GPTSw3 in __init__ files
      
      * Update READMEs with 'make fix-copies'
      
      * Add 20b model to archive list
      
      * Add documentation for GPT-Sw3
      
      * Fix typo in documentation for GPT-Sw3
      
      * Do 'make fix-copies' again after having updated docs
      
      * Fix some typos in docs
      
      * Update src/transformers/models/gpt_sw3/configuration_gpt_sw3.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/models/gpt_sw3/configuration_gpt_sw3.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/models/gpt_sw3/__init__.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/models/gpt_sw3/__init__.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/models/gpt_sw3/convert_megatron_to_pytorch.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/models/gpt_sw3/modeling_gpt_sw3.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update tests/models/gpt_sw3/test_tokenization_gpt_sw3.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/models/gpt_sw3/modeling_gpt_sw3.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/models/gpt_sw3/modeling_gpt_sw3.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Resolve comments from PR feedback
      
      * Resolve more comments from PR feedback, also set use_cache=True in convert script
      
      * Add '# Copied from' comments for GPTSw3 modeling
      
      * Set 'is_parallelizable = False'
      
      * Remove '# Copied from' where code was modified and add 'with x->y' when appropriate
      
      * Remove parallelize in mdx
      
      * make style, make quality
      
      * Update GPTSw3Config default values and corresponding documentation
      
      * Update src/transformers/models/gpt_sw3/tokenization_gpt_sw3.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

      * Update src/transformers/models/gpt_sw3/__init__.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Clean up and protect GPTSw3Tokenizer imports with is_sentencepiece_available
      
      * Make style, make quality
      
      * Add dummy object for GPTSw3Tokenizer via 'make fix-copies'
      
      * make fix-copies
      
      * Remove GPTSw3 modeling classes
      
      * make style, make quality
      
      * Add GPTSw3 auto-mappings for other GPT2 heads
      
      * Update docs/source/en/model_doc/gpt-sw3.mdx
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/models/gpt_sw3/convert_megatron_to_pytorch.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/models/gpt_sw3/tokenization_gpt_sw3.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Remove old TODO-comment
      
      * Add example usage to GPTSw3Tokenizer docstring
      
      * make style, make quality
      
      * Add implementation details and example usage to gpt-sw3.mdx
      Co-authored-by: JoeyOhman <joeyoh@kth.se>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      5f94855d
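      Since GPT-Sw3 reuses the GPT-2 modeling code and only ships its own sentencepiece tokenizer, plain Auto classes suffice. A minimal generation sketch; the model id and Swedish prompt are assumptions (the PR mentions an official repository with sizes up to 20b).

        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "AI-Sweden-Models/gpt-sw3-126m"  # assumed model id
        tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to GPTSw3Tokenizer
        model = AutoModelForCausalLM.from_pretrained(model_id)

        input_ids = tokenizer("Träd är fina för att", return_tensors="pt").input_ids
        generated = model.generate(input_ids, max_new_tokens=10, do_sample=True)
        print(tokenizer.decode(generated[0]))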
  8. 07 Dec, 2022 1 commit
  9. 06 Dec, 2022 1 commit
  10. 05 Dec, 2022 1 commit
    • Add BioGPT (#20420) · 13e73668
      Kamal Raj Kanakarajan authored
      * biogpt initial commit
      
      * updated init
      
      * fix faster decoding with use_cache
      
      * 1. fix input_ids and input_embeds with correct device
      2. added _keys_to_ignore_on_load_missing
      3. updated prepare_inputs_for_generation
      
      * add activation_dropout and scale_embedding
      
      * replace fsmt attention with bart attention
      
      * added test
      
      * run make fix-copies
      
      * doc init and fix build
      
      * updated README with proper information
      
      * 1. added tips to docs
      2. updated BioGptTokenizer func
      
      * 1. added tokenizer test
      2. refactor tokenizer
      
      * make fixup
      
      * add biogpt fairseq to hf converter
      
      * updated layer names to be more similar to original checkpoints
      
      * config update doc string and set defaults
      
      * added "#copied" from bart model and
      updated doc strings
      
      * enable model_input_names in tokenizer
      
      * 1. positional embedding depending on attention_mask
      2. added attention mask to prepare for generation
      
      * added test to verify past and generation
      
      * BioGptLMHeadModel -> BioGptForCausalLM
      
      * fix typo
      
      * tokenization and test
      Copyright and updated assertion
      
      * updated Copyright and
      one func at a time in line
      
      * Copyright updates and
      minor doc fix
      
      * replace assertion with ValueError
      
      * rm extra space
      
      * added code syntax
      
      * revert comment position change
      
      * add tokenizer to auto
      
      * updated doc string
      
      * tokenizer doc string update
      
      * biogpt hub model update to microsoft/biogpt
      
      * make fixup
      
      * remove comment to fix flake8 5.0.4 vs 6 error
      13e73668
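      A minimal generation sketch with the classes and hub id mentioned above (BioGptForCausalLM, microsoft/biogpt); the prompt is an arbitrary example.

        from transformers import BioGptForCausalLM, BioGptTokenizer

        tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
        model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

        inputs = tokenizer("COVID-19 is", return_tensors="pt")
        # use_cache speeds up decoding, as fixed early in this PR.
        outputs = model.generate(**inputs, max_new_tokens=20)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))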
  11. 02 Dec, 2022 1 commit
    • [New Model] Add TimeSformer model (#18908) · cc3d0e1b
      fatih authored
      * init timesformer
      
      * apply fix-copies
      
      * reformat style
      
      * revert back some incorrect style updates
      
      * init timesformer
      
      * apply fix-copies
      
      * reformat style
      
      * revert back some incorrect style updates

      * update timesformer doc
      
      * add some functions and classes
      
      * add new config params
      
      * implement multiple classes
      
      * update TimeSformerLayer
      
      * update TimeSformerModel, TimeSformerPreTrainedModel, TimeSformerEncoder
      
      * several fixes
      
      * reformat
      
      * temporary update
      
      * fix some typos
      
      * fix weight converter
      
      * more fixes
      
      * fix a typo
      
      * fix typo
      
      * remove redundant params
      
      * fix for latest hf-hub
      
      * merge fix
      
      * fix some checks
      
      * video classification works with einops
      
      * add paper info to docs
      
      * merge fix
      
      * remove redundant line
      
      * remove redundant docstring
      
      * update config
      
      * fix some typos
      
      * fix converter
      
      * update some test constants
      
      * refactor einops functions
      
      * reformat
      
      * fix a comment
      
      * remove redundant imports
      
      * reformat
      
      * fix a typo
      
      * remove comment
      
      * remove unused imports
      
      * remove redundant doc line
      
      * reformat
      
      * add missing line
      
      * fix docs
      
      * fix timesformer auto feat ext
      
      * add unittests
      
      * reformat
      
      * fix docs
      
      * some fixes and updates
      
      * fix readme
      
      * fix modeling
      
      * fix readme
      
      * update index
      
      * revert _toctree.yml changes
      
      * update timesformer.mdx

      * update drop_path_prob to drop_path_rate

      * add docstring for drop_path_rate
      
      * update TimeSformerPatchEmbed naming
      
      * remove to_2tuple
      
      * explicit use of nn.functional
      
      * reformat
      
      * many updates from review comments
      
      * fix a typo
      
      * reformat
      
      * remove assert, better variable name
      
      * make variable names more explicit
      
      * add some adapted from
      
      * more explicit variable names
      
      * remove redundant docstring
      
      * fix initialization
      
      * move permute inside embedding
      
      * update class names
      
      * remove unused imports
      
      * add test for video classification
      
      * update PretrainedModel with PreTrainedModel
      
      * remove double permute
      
      * update based on sylvain's review
      
      * apply auto fix
      
      * update image_processing_auto for timesformer
      
      * update hub urls
      
      * reformat
      
      * remove duplicate import
      
      * update doc link
      cc3d0e1b
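      A minimal video-classification sketch; the checkpoint id (a Kinetics-400 fine-tuned variant) and the random frames are assumptions. A video is passed to the image processor as a list of frames.

        import numpy as np
        import torch
        from transformers import AutoImageProcessor, TimesformerForVideoClassification

        ckpt = "facebook/timesformer-base-finetuned-k400"  # assumed checkpoint id
        processor = AutoImageProcessor.from_pretrained(ckpt)
        model = TimesformerForVideoClassification.from_pretrained(ckpt)

        # 8 random RGB frames standing in for a real clip.
        video = list(np.random.randint(0, 256, (8, 3, 224, 224), dtype=np.uint8))
        inputs = processor(video, return_tensors="pt")

        with torch.no_grad():
            logits = model(**inputs).logits
        print(model.config.id2label[logits.argmax(-1).item()])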
  12. 01 Dec, 2022 1 commit
  13. 30 Nov, 2022 1 commit
    • Add Chinese-CLIP implementation (#20368) · 72176402
      Yang An authored
      
      
      * init chinese-clip model from clip
      
      * init model tests and docs
      
      * implement chinese-clip into hf
      
      * implement chinese-clip into hf
      
      * implement chinese-clip into hf
      
      * implement chinese-clip into hf
      
      * implement chinese-clip into hf
      
      * update usecase example in model implementation
      
      * fix codestyle
      
      * fix model_type typo in readme
      
      * add placeholder in doc
      
      * add placeholder in doc
      
      * update the init script
      
      * update usecase
      
      * fix codestyle
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * forward the convert_rgb
      
      * update testcase
      
      * update testcase
      
      * update testcase
      
      * merge the recent update from clip about model_input_name property
      
      * update the doc
      
      * update the doc
      
      * update the doc
      
      * update the doc
      
      * remove unused imports
      
      * reformat code style
      
      * update the doc
      
      * fix isort style
      
      * bypass a weird failed unit test which is unrelated to my PR
      
      * update the doc
      
      * implement independent vision config class
      
      * implement independent vision model class
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * make style
      
      * fix refactor bug
      
      * make style
      
      * fix refactor bug
      
      * fix refactor bug
      
      * make style
      
      * fix refactor bug
      
      * fix refactor bug
      
      * doc-build restyle
      
      * implement independent text config class
      
      * implement independent text model class
      
      * implement independent text model class
      
      * make style
      
      * make fix-copies
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * fix refactor bug
      
      * make style
      
      * update doc
      
      * black and isort
      
      * update doc
      
      * Update src/transformers/models/chinese_clip/configuration_chinese_clip.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

      * Update src/transformers/models/auto/tokenization_auto.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * modify the model type from chinese-clip to chinese_clip
      
      * format the example comment of ChineseCLIPVisionConfig
      
      * correct the copyright comment
      
      * fix the tokenizer specification
      
      * add copied from for loss function
      
      * remove unused class
      
      * update CHINESE_CLIP_TEXT_INPUTS_DOCSTRING
      
      * update CHINESE_CLIP_INPUTS_DOCSTRING
      
      * update doc
      
      * update doc
      
      * update code comment in config
      
      * update copied from statement
      
      * make style
      
      * rename the doc file
      
      * add copied statement
      
      * remove unused attention_mask, causal_attention_mask in ChineseCLIPVisionEncoder
      
      * remove ChineseCLIPTextPreTrainedModel
      
      * fix bug
      
      * fix bug
      
      * fix bug
      
      * update doc
      
      * make style
      
      * Update src/transformers/models/chinese_clip/configuration_chinese_clip.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

      * Update src/transformers/models/chinese_clip/configuration_chinese_clip.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * update ChineseCLIPImageProcessor in image_processing_auto
      
      * fix config_class of chinesecliptextmodel
      
      * fix the test case
      
      * update the docs
      
      * remove the copied from comment for ChineseCLIPTextModel, since it has diverged from BertModel with a custom config_class
      
      * update the testcase
      
      * final fix
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      72176402
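      A minimal zero-shot sketch mirroring the usage example the commits mention; the checkpoint id under the OFA-Sys organization is an assumption.

        from PIL import Image
        import requests
        from transformers import ChineseCLIPModel, ChineseCLIPProcessor

        ckpt = "OFA-Sys/chinese-clip-vit-base-patch16"  # assumed checkpoint id
        model = ChineseCLIPModel.from_pretrained(ckpt)
        processor = ChineseCLIPProcessor.from_pretrained(ckpt)

        url = "http://images.cocodataset.org/val2017/000000039769.jpg"
        image = Image.open(requests.get(url, stream=True).raw)

        # Chinese text prompts: "a cat", "a dog".
        inputs = processor(text=["一只猫", "一只狗"], images=image, padding=True, return_tensors="pt")
        outputs = model(**inputs)
        print(outputs.logits_per_image.softmax(dim=1))  # image-text match probabilities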
  14. 21 Nov, 2022 3 commits
    • Add Audio Spectrogram Transformer (#19981) · 4973d2a0
      NielsRogge authored
      
      
      * First draft
      
      * Make conversion script work
      
      * Add id2label mapping, run code quality
      
      * Fix copies
      
      * Add first draft of feature extractor
      
      * Update conversion script to use feature extractor
      
      * Make more tests pass
      
      * Add docs
      
      * update input_features to input_values + pad by default to max length
      
      * Fix doc tests
      
      * Add feature extractor tests
      
      * Add proper padding/truncation to feature extractor
      
      * Add support for conversion of all audioset checkpoints
      
      * Improve docs and extend conversion script
      
      * Fix README
      
      * Rename spectogram to spectrogram
      
      * Fix copies
      
      * Add integration test
      
      * Remove dummy conv
      
      * Update to ast
      
      * Update organization
      
      * Fix init
      
      * Rename model to AST
      
      * Add require_torchaudio annotator
      
      * Move import of ASTFeatureExtractor under a is_speech_available
      
      * Fix rebase
      
      * Add pipeline config
      
      * Update name of classifier head
      
      * Rename time_dimension and frequency_dimension for clarity
      
      * Remove print statement
      
      * Fix pipeline test
      
      * Fix pipeline test
      
      * Fix index table
      
      * Fix init
      
      * Fix conversion script
      
      * Rename to ForAudioClassification
      
      * Fix index table
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      4973d2a0
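      A minimal audio-classification sketch; the AudioSet checkpoint id is an assumption. Per the commits, the feature extractor needs torchaudio (see the require_torchaudio commit), takes raw waveforms as input_values, and pads to max length by default.

        import numpy as np
        import torch
        from transformers import ASTFeatureExtractor, ASTForAudioClassification

        ckpt = "MIT/ast-finetuned-audioset-10-10-0.4593"  # assumed checkpoint id
        extractor = ASTFeatureExtractor.from_pretrained(ckpt)
        model = ASTForAudioClassification.from_pretrained(ckpt)

        waveform = np.zeros(16000, dtype=np.float32)  # one second of silence at 16 kHz
        inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")

        with torch.no_grad():
            logits = model(input_values=inputs.input_values).logits
        print(model.config.id2label[logits.argmax(-1).item()])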
    • add MobileNetV1 model (#17799) · d21c97cc
      Matthijs Hollemans authored
      * add model files etc for MobileNetV2
      
      rename files for MobileNetV1
      
      initial implementation of MobileNetV1
      
      fix conversion script
      
      cleanup
      
      write docs
      
      tweaks
      
      fix conversion script
      
      extract hidden states
      
      fix test cases
      
      make fixup
      
      fixup it all
      
      remove main from doc link
      
      fixes
      
      fix tests
      
      fix up
      
      use google org
      
      fix weird assert
      
      * fixup
      
      * use google organization for checkpoints
      d21c97cc
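      A minimal image-classification sketch; per the last two commits the checkpoints live under the google organization (exact checkpoint id assumed).

        from PIL import Image
        import requests
        import torch
        from transformers import AutoImageProcessor, MobileNetV1ForImageClassification

        ckpt = "google/mobilenet_v1_1.0_224"  # assumed checkpoint id
        processor = AutoImageProcessor.from_pretrained(ckpt)
        model = MobileNetV1ForImageClassification.from_pretrained(ckpt)

        url = "http://images.cocodataset.org/val2017/000000039769.jpg"
        image = Image.open(requests.get(url, stream=True).raw)

        inputs = processor(images=image, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        print(model.config.id2label[logits.argmax(-1).item()])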
    • fix: "BigSicence" typo in docs (#20331) · 22d7161a
      Raj Rajhans authored
      22d7161a
  15. 18 Nov, 2022 1 commit
    • Add Neighborhood Attention Transformer (NAT) and Dilated NAT (DiNAT) models (#20219) · fc4a993e
      Ali Hassani authored
      * Add DiNAT
      
      * Adds DiNAT + tests
      
      * Minor fixes
      
      * Added HF model
      
      * Add natten to dependencies.
      
      * Cleanup
      
      * Minor fixup
      
      * Reformat
      
      * Optional NATTEN import.
      
      * Reformat & add doc to _toctree
      
      * Reformat (finally)
      
      * Dummy objects for DiNAT
      
      * Add NAT + minor changes
      
      Adds NAT as its own independent model + docs, tests
      Adds NATTEN to ext deps to ensure ci picks it up.
      
      * Remove natten from `all` and `dev-torch` deps, add manual pip install to ci tests
      
      * Minor fixes.
      
      * Fix READMEs.
      
      * Requested changes to docs + minor fixes.
      
      * Requested changes.
      
      * Add NAT/DiNAT tests to layoutlm_job
      
      * Correction to Dinat doc.
      
      * Requested changes.
      fc4a993e
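      A minimal classification sketch for DiNAT. As the commits note, NATTEN is an optional dependency, so `pip install natten` is needed first; the checkpoint id is an assumption.

        from PIL import Image
        import requests
        import torch
        from transformers import AutoImageProcessor, DinatForImageClassification

        ckpt = "shi-labs/dinat-mini-in1k-224"  # assumed checkpoint id
        processor = AutoImageProcessor.from_pretrained(ckpt)
        model = DinatForImageClassification.from_pretrained(ckpt)  # requires natten

        url = "http://images.cocodataset.org/val2017/000000039769.jpg"
        image = Image.open(requests.get(url, stream=True).raw)

        inputs = processor(images=image, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        print(model.config.id2label[logits.argmax(-1).item()])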
  16. 15 Nov, 2022 1 commit
    • Add Switch transformers (#19323) · 163ac3d3
      Younes Belkada authored
      
      
      * first commit
      
      * add more comments
      
      * add router v1
      
      * clean up
      
      - remove `tf` modeling files
      
      * clean up
      
      - remove `tf` modeling files
      
      * clean up
      
      * v0 routers
      
      * added more router
      
      - Implemented `ExpertsChooseMaskedRouter`
      
      - added tests
      - 2 more routers to implement
      
      * last router
      
      * improved docstring
      
      - completed the docstring in `router.py`
      - added more args in the config
      
      * v0 sparse mlp
      
      * replace wrong naming
      
      * forward pass run
      
      * update MOE layer
      
      * small router update
      
      * fixup
      
      * consistency
      
      * remove scatter router
      
      * remove abstract layer
      
      * update test and model for integration testing
      
      * v1 conversion
      
      * update
      
      * hardcode hack
      
      * all keys match
      
      * add gin conversion, without additional libraries
      
      * update conversion script
      
      * delete router file
      
      * update tests wrt router deletion
      
      * fix router issues
      
      * update expert code
      
      * update, logits match, code needs refactoring
      
      * Refactor code
      Co-authored-by: Younes Belkada <younesbelkada@users.noreply.github.com>

      * add generate tests
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>

      * add support for router loss
      Co-authored-by: Younes Belkada <younesbelkada@users.noreply.github.com>
      
      * fix forward error
      
      * refactor a bit
      
      * remove `FlaxSwitchTransformers` modules
      
      * more tests pass
      
      * Update code
      Co-authored-by: Younes Belkada <younesbelkada@users.noreply.github.com>
      
      * fixup
      
      * fix tests
      
      * fix doc
      
      * fix doc + tokenization
      
      * fix tokenizer test
      
      * fix test
      
      * fix loss output
      
      * update code for backward pass
      
      * add loss support
      
      * update documentation
      
      * fix documentation, clean tokenizer
      
      * more doc fix, cleanup example_switch
      
      * fix failing test
      
      * fix test
      
      * fix test
      
      * fix loss issue
      
      * move layer
      
      * update doc and fix router capacity usage
      
      * fixup
      
      * add sparse mlp index for documentation on hub
      
      * fixup
      
      * test sparse mix architecture
      
      * Apply suggestions from code review
      
      * Update docs/source/en/model_doc/switch_transformers.mdx
      
      * fixup on update
      
      * fix tests
      
      * fix another test
      
      * attempt fix
      
      * Update src/transformers/models/switch_transformers/configuration_switch_transformers.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/models/switch_transformers/convert_switch_transformers_original_flax_checkpoint_to_pytorch.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * try
      
      * all tests pass
      
      * fix jitter noise
      
      * Apply suggestions from code review
      
      * doc tests pass
      
      * Update src/transformers/models/switch_transformers/modeling_switch_transformers.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

      * Update src/transformers/models/switch_transformers/modeling_switch_transformers.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove assert
      
      * change config order
      
      * fix readme japanese
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * remove parallelizable tests + add one liners
      
      * remove ONNX config
      
      * fix nits
      
      - add `T5Tokenizer` in auto mapping
      - remove `Switch Transformers` from ONNX supported models
      
      * remove `_get_router`
      
      * remove asserts
      
      * add check in test for `router_dtype`
      
      * add `SwitchTransformersConfig` in `run_pipeline_test`
      
      * Update tests/pipelines/test_pipelines_summarization.py
      
      * add huge model conversion script
      
      * fix slow tests
      
      - add better casting for `Linear8bitLt`
      - remove `torchscript` tests
      
      * add make dir
      
      * style on new script
      
      * fix nits
      
      - doctest
      - remove `_keys_to_ignore_on_load_unexpected`
      
      * Update src/transformers/models/switch_transformers/configuration_switch_transformers.py
      
      * add google as authors
      
      * fix year
      
      * remove last `assert` statements
      
      * standardize vertical spaces
      
      * fix failing import
      
      * fix another failing test
      
      * Remove strange `àuthorized_keys`
      
      * removing todo and padding that is never used
      Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
      Co-authored-by: ybelkada <younes@huggingface.co>
      Co-authored-by: Younes Belkada <younesbelkada@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Arthur Zucker <arthur@huggingface.co>
      163ac3d3
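      A minimal seq2seq sketch; per the commits, T5Tokenizer is reused via the auto mapping, so the model behaves like T5 with sentinel tokens. The checkpoint id is an assumption.

        from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

        ckpt = "google/switch-base-8"  # assumed checkpoint id
        tokenizer = AutoTokenizer.from_pretrained(ckpt)  # resolves to T5Tokenizer
        model = SwitchTransformersForConditionalGeneration.from_pretrained(ckpt)

        input_ids = tokenizer("A <extra_id_0> walks into a bar.", return_tensors="pt").input_ids
        outputs = model.generate(input_ids, max_new_tokens=20)
        print(tokenizer.decode(outputs[0], skip_special_tokens=False))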
  17. 14 Nov, 2022 1 commit
    • add MobileNetV2 model (#17845) · f711d683
      Matthijs Hollemans authored
      * add model files etc for MobileNetV2
      
      * rename files for MobileNetV1
      
      * initial implementation of MobileNetV1
      
      * fix conversion script
      
      * cleanup
      
      * write docs
      
      * tweaks
      
      * fix conversion script
      
      * extract hidden states
      
      * fix test cases
      
      * make fixup
      
      * fixup it all
      
      * rename V1 to V2
      
      * fix checkpoints
      
      * fixup
      
      * implement first block + weight conversion
      
      * add remaining layers
      
      * add output stride and dilation
      
      * fixup
      
      * add tests
      
      * add deeplabv3+ head
      
      * a bit of fixup
      
      * finish deeplab conversion
      
      * add link to doc
      
      * fix issue with JIT trace
      
      in_height and in_width would be Tensor objects during JIT trace, which caused Core ML conversion to fail on the remainder op. By making them ints, the result of the padding calculation becomes a constant value.
      
      * cleanup
      
      * fix order of models
      
      * fix rebase error
      
      * remove main from doc link
      
      * add image processor
      
      * remove old feature extractor
      
      * fix converter + other issues
      
      * fixup
      
      * fix unit test
      
      * add to onnx tests (but these appear broken now)
      
      * add post_process_semantic_segmentation
      
      * use google org
      
      * remove unused imports
      
      * move args
      
      * replace weird assert
      f711d683
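      A minimal semantic-segmentation sketch exercising the DeepLabV3+ head and post_process_semantic_segmentation added above; the checkpoint id is an assumption.

        from PIL import Image
        import requests
        import torch
        from transformers import AutoImageProcessor, MobileNetV2ForSemanticSegmentation

        ckpt = "google/deeplabv3_mobilenet_v2_1.0_513"  # assumed checkpoint id
        processor = AutoImageProcessor.from_pretrained(ckpt)
        model = MobileNetV2ForSemanticSegmentation.from_pretrained(ckpt)

        url = "http://images.cocodataset.org/val2017/000000039769.jpg"
        image = Image.open(requests.get(url, stream=True).raw)

        inputs = processor(images=image, return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)

        # Resizes the low-resolution logits back to the original image size.
        seg = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
        print(seg.shape)  # (height, width) map of class ids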
  18. 10 Nov, 2022 1 commit
  19. 08 Nov, 2022 2 commits
    • Add RocBert (#20013) · efa889d2
      Weiwe Shi authored
      
      
      * add roc_bert
      
      * update roc_bert readme
      
      * code style
      
      * change name and delete unused file

      * update model file

      * delete unused log file
      
      * delete tokenizer fast
      
      * reformat code and change model file path
      
      * add RocBertForPreTraining
      
      * update docs
      
      * delete wrong notes
      
      * fix copies
      
      * fix make repo-consistency error
      
      * fix files are not present in the table of contents error
      
      * change RocBert -> RoCBert
      
      * add doc, add detail test
      Co-authored-by: weiweishi <weiweishi@tencent.com>
      efa889d2
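      A minimal masked-LM sketch; the checkpoint id is an assumption. The RoCBertTokenizer also emits shape and pronunciation ids, which is what makes the model robust to adversarial Chinese typos.

        import torch
        from transformers import RoCBertForMaskedLM, RoCBertTokenizer

        ckpt = "weiweishi/roc-bert-base-zh"  # assumed checkpoint id
        tokenizer = RoCBertTokenizer.from_pretrained(ckpt)
        model = RoCBertForMaskedLM.from_pretrained(ckpt)

        # Returns input_ids plus input_shape_ids and input_pronunciation_ids.
        inputs = tokenizer("这是一个测试", return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        print(logits.shape)  # (batch, seq_len, vocab_size)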
    • Add CLIPSeg (#20066) · 25896306
      NielsRogge authored
      
      
      * Add first draft
      
      * Update conversion script
      
      * Improve conversion script
      
      * Improve conversion script some more
      
      * Add conditional embeddings
      
      * Add initial decoder
      
      * Fix activation function of decoder
      
      * Make decoder outputs match original implementation
      
      * Make decoder outputs match original implementation
      
      * Add more copied from statements
      
      * Improve model outputs
      
      * Fix auto tokenizer file
      
      * Fix more tests
      
      * Add test
      
      * Improve README and docs, improve conditional embeddings
      
      * Fix more tests
      
      * Remove print statements
      
      * Remove initial embeddings
      
      * Improve conversion script
      
      * Add interpolation of position embeddings
      
      * Finish addition of interpolation of position embeddings
      
      * Add support for refined checkpoint
      
      * Fix refined checkpoint
      
      * Remove unused parameter
      
      * Improve conversion script
      
      * Add support for training
      
      * Fix conversion script
      
      * Add CLIPSegFeatureExtractor
      
      * Fix processor
      
      * Fix CLIPSegProcessor
      
      * Fix conversion script
      
      * Fix most tests
      
      * Fix equivalence test
      
      * Fix README
      
      * Add model to doc tests
      
      * Use better variable name
      
      * Convert other checkpoint as well
      
      * Update config, add link to paper
      
      * Add docs
      
      * Update organization
      
      * Replace base_model_prefix with clip
      
      * Fix base_model_prefix
      
      * Fix checkpoint of config
      
      * Fix config checkpoint
      
      * Remove file
      
      * Use logits for output
      
      * Fix tests
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      25896306
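      A minimal prompt-based segmentation sketch; the refined checkpoint id is an assumption based on the "Add support for refined checkpoint" commits.

        from PIL import Image
        import requests
        import torch
        from transformers import CLIPSegForImageSegmentation, CLIPSegProcessor

        ckpt = "CIDAS/clipseg-rd64-refined"  # assumed checkpoint id
        processor = CLIPSegProcessor.from_pretrained(ckpt)
        model = CLIPSegForImageSegmentation.from_pretrained(ckpt)

        url = "http://images.cocodataset.org/val2017/000000039769.jpg"
        image = Image.open(requests.get(url, stream=True).raw)
        prompts = ["a cat", "a remote control"]

        inputs = processor(text=prompts, images=[image] * len(prompts), padding=True, return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)
        # One low-resolution segmentation logit map per (image, prompt) pair.
        print(outputs.logits.shape)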
  20. 04 Nov, 2022 1 commit
  21. 01 Nov, 2022 2 commits
  22. 26 Oct, 2022 2 commits
  23. 18 Oct, 2022 1 commit
    • Add table transformer [v2] (#19614) · dd523da5
      NielsRogge authored
      * First draft
      
      * Add conversion script
      
      * Make conversion work
      
      * Upload checkpoints
      
      * Add final fixes
      
      * Revert changes of conditional and deformable detr
      
      * Fix toctree, add and remove copied from
      
      * Use model type
      
      * Improve docs
      
      * Improve code example
      
      * Update copies
      
      * Add copied from
      
      * Don't update conditional detr
      
      * Don't update deformable detr
      dd523da5
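      A minimal detection sketch; the checkpoint id is an assumption and the input path is a placeholder for any page image. Pre- and post-processing is DETR-style.

        from PIL import Image
        import torch
        from transformers import AutoImageProcessor, TableTransformerForObjectDetection

        ckpt = "microsoft/table-transformer-detection"  # assumed checkpoint id
        processor = AutoImageProcessor.from_pretrained(ckpt)
        model = TableTransformerForObjectDetection.from_pretrained(ckpt)

        image = Image.open("page.png").convert("RGB")  # placeholder document image
        inputs = processor(images=image, return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)

        target_sizes = torch.tensor([image.size[::-1]])
        results = processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
        print(results["scores"], results["labels"], results["boxes"])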
  24. 17 Oct, 2022 2 commits
  25. 12 Oct, 2022 1 commit
    • Add LiLT (#19450) · 4d367a3c
      NielsRogge authored
      
      
      * First draft
      
      * Fix more things
      
      * Improve more things
      
      * Remove some head models
      
      * Fix more things
      
      * Add missing layers
      
      * Remove tokenizer
      
      * Fix more things
      
      * Fix copied from statements
      
      * Make all tests pass
      
      * Remove print statements
      
      * Remove files
      
      * Fix README and docs
      
      * Add integration test and fix organization
      
      * Add tips
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Make tests faster, improve docs
      
      * Fix doc tests
      
      * Add model to toctree
      
      * Add docs
      
      * Add note about creating new checkpoint
      
      * Remove is_decoder
      
      * Make tests smaller, add docs
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      4d367a3c
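      A minimal forward-pass sketch; LiLT pairs a text backbone with layout embeddings, so it takes one normalized (0-1000) bounding box per token alongside input_ids. The checkpoint id and the dummy box are assumptions.

        import torch
        from transformers import AutoModel, AutoTokenizer

        ckpt = "SCUT-DLVCLab/lilt-roberta-en-base"  # assumed checkpoint id
        tokenizer = AutoTokenizer.from_pretrained(ckpt)
        model = AutoModel.from_pretrained(ckpt)

        encoding = tokenizer("Hello world", return_tensors="pt")
        seq_len = encoding.input_ids.shape[1]
        bbox = torch.tensor([[[37, 21, 105, 40]] * seq_len])  # one dummy box per token

        outputs = model(input_ids=encoding.input_ids, bbox=bbox, attention_mask=encoding.attention_mask)
        print(outputs.last_hidden_state.shape)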
  26. 11 Oct, 2022 1 commit
  27. 10 Oct, 2022 2 commits
    • Dev version · 10100979
      Lysandre authored
      10100979
    • Add TF whisper (#19378) · e3f028f3
      amyeroberts authored
      
      
      * simplify loop
      
      * add feature extractor
      
      * add model
      
      * start conversion
      
      * add dropout
      
      * initial commit of test files
      
      * conversion for all models
      
      * update processor for correct padding
      
      * update feature extraction
      
      * update integration test logits match
      
      * fmt: off for the logits
      
      * on the fly mel bank
      
      * small nit
      
      * update test
      
      * update tokenizer
      
      * nit feature extraction
      
      * update
      
      * update tokenizer test
      
      * adds logit processor and update tokenizer to get suppress tokens
      
      * style
      
      * clean convert
      
      * revert to original modeling tf utils
      
      * Update
      
      * update
      
      * nit
      
      * clean convert file
      
      * update tests and nits
      
      * quality
      
      * slow generation test
      
      * ffn_dim to allow customization
      
      * update readme
      
      * add to toctree
      
      * start fixing integration tests
      
      * update tests and code
      
      * fix feature extractor
      
      * fix config tests common
      
      * update code to fix tests
      
      * fix feature extractor
      
      * nit feature extraction
      
      * update test for new feature extractor
      
      * style
      
      * add abstract

      * large logits with custom decoder input ids

      * wrap around is_torch_available
      
      * fix feature extractor
      
      * correct logits for whisper small.en
      
      * nit
      
      * fix encoder_attention_mask
      
      * some fixes
      
      * remove unnecessary inputs
      
      * nits
      
      * add normalizer file
      
      * update test tokenization
      
      * fix attention mask not defined
      
      * fix generate
      
      * remove useless encoder attention mask

      * update test modeling whisper

      * update config to add second non-suppress tokens

      * nits on feature extractor

      * nit for test tokenizers

      * update tests
      
      * update tests
      
      * update tokenization test
      
      * fixup
      
      * invalidated hf token. Clean convert openai to whisper
      
      * fix logit tests
      
      * fixup
      
      * Add model to README
      
      * Fix doc tests
      
      * clean merge
      
      * revert toc_tree changes
      
      * remove useless LogitProcessor
      
      * Update whisper.mdx
      
      * update config file doc
      
      * update configuration docstring
      
      * update test tokenization
      
      * update test tokenization
      
      * update tokenization whisper
      Added copied from where needed
      
      * update feature extraction
      
      * nit test name
      
      * style
      
      * quality
      
      * remove get suppress tokens and update non_speech tokens global variables
      
      * Update src/transformers/models/whisper/feature_extraction_whisper.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * clean modeling whisper and test
      Removed the attention mask arguments that are deprecated
      
      * fix large test
      
      * Add multilingual audio test, and translate test
      
      * style
      
      * fix larg multilingual test
      
      * nits
      
      * add copied from for attention layer
      
      * remove attention masks in doc
      
      * add english normalizer
      
      * Update docs/source/en/model_doc/whisper.mdx
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * update tokenization test
      
      * remove copied from in whisper attention : no bias in k_proj only
      
      * wrap around dependencies in english normalizer
      
      * style
      
      * correct import generation logits
      
      * for now, wrap feature extractor with torch
      
      * remove torch depencies for feature extraction and style
      
      * Update src/transformers/models/whisper/convert_openai_whisper_to_tfms.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

      * Update src/transformers/models/whisper/configuration_whisper.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

      * Update docs/source/en/model_doc/whisper.mdx
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * fixup
      
      * nit
      
      * update logits
      
      * style
      
      * nit
      
      * nits and fix final tests
      
      * add `is_more_itertools_available` to utils
      
      * quality
      
      * add begin suppress tokens, suppress tokens to generate args and config

      * clean SuppressTokensLogitsProcessor in generation logits

      * Nit naming

      * add SuppressTokensAtBegin

      * update tests, suppress tokens to None or correct values
      
      * nit and style
      
      * update RAG to fit test and generate_logit
      
      * add copy-pasted statement on english normalizer
      
      * add arguments to config_common_kwargs
      
      * Update src/transformers/generation_utils.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

      * Update src/transformers/generation_logits_process.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * revert changes based on reviews
      
      * update doc and nits
      
      * Update src/transformers/models/whisper/configuration_whisper.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * more nits
      
      * last nits
      
      * update test configuration common
      
      * add BART name in decoder attention mask documentation
      
      * Update src/transformers/models/whisper/modeling_whisper.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * style
      
      * nit
      
      * nit
      
      * add english.json file to git
      
      * nits on documentation
      
      * nit
      
      * nits
      
      * last styling
      
      * add main toctree file
      
      * remove sentence piece dependency
      
      * clean init file
      
      * fix tokenizer that has no dependencies on sentencepiece
      
      * update whisper init file, nit
      
      * remove english.json file
      
      * add get decoder prompt id
      
      * All weights loading
      
      * Remove hanging pdb
      
      * Fixup and tidy up
      
      * Use same copied from as PT model
      
      * Remove whitespace changes
      
      * Remove torch references
      
      * Tie embeddings
      
      * Remove logits processor input to generate
      
      * Update logit values
      
      * revert changes and add forced logit processor
      
      * nit
      
      * clean normalizer
      
      * remove protected
      
      * Add logit processors and update generation code & tests
      
      * Some tidy up
      
      * Update docstring
      
      * update
      
      * update based on review
      
      * Update src/transformers/models/whisper/configuration_whisper.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

      * Update src/transformers/models/whisper/configuration_whisper.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update to reflect changes on the PT model branch
      
      * Tidy up
      
      * Remove extra whitespace
      
      * Fix test - make input ids small enough we can append
      
      * Include upstream changes on main
      
      * PR comments - add batch tests, remove comments & defaults
      
      * Fix model output imports
      
      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

      * Update src/transformers/generation_tf_logits_process.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

      * Update tests/models/whisper/test_modeling_tf_whisper.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update docstring example
      
      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Remove changes to adjust_logits_during_generation function
      
      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Tidy up imports that don't require TF
      
      * Update tests - skip and no more skip
      
      * Update tests/generation/test_generation_tf_logits_process.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      
      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Add training flags
      
      * Add (skipped) XLA generation tests
      
      * Add embedding correctness test
      
      * Add constant ids for generation tests
      
      * Make logits finding a bit tidier
      
      * Remove unused args
      
      * xla generation enabled
      
      * Don't skip XLA tests anymore
      
      * Fix tests - add position ids to expected signature and update rag generation
      
      * Undo method reorder
      
      * Remove added whitespace
      
      * Remove copy-paste gradient checkpoint ref
      
      * Remove
      
      * Trigger CI - (issue with refs when pulling)
      Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: NielsRogge <niels.rogge1@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      Co-authored-by: Joao Gante <joao@huggingface.co>
      e3f028f3
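      A minimal TF transcription sketch; the checkpoint id is an assumption (the PyTorch PR below converts the openai/whisper-* family), and the dummy LibriSpeech dataset is only for illustration.

        from datasets import load_dataset
        from transformers import TFWhisperForConditionalGeneration, WhisperProcessor

        ckpt = "openai/whisper-tiny.en"  # assumed checkpoint id
        processor = WhisperProcessor.from_pretrained(ckpt)
        model = TFWhisperForConditionalGeneration.from_pretrained(ckpt)

        ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
        sample = ds[0]["audio"]
        input_features = processor(
            sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="tf"
        ).input_features

        predicted_ids = model.generate(input_features)
        print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])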
  28. 05 Oct, 2022 1 commit
    • Add WhisperModel to transformers (#19166) · 45e14038
      Arthur authored
      
      
      * simplify loop
      
      * add feature extractor
      
      * add model
      
      * start conversion
      
      * add dropout
      
      * initial commit of test files
      
      * conversion for all models
      
      * update processor for correct padding
      
      * update feature extraction
      
      * update integration test logits match
      
      * fmt: off for the logits
      
      * on the fly mel bank
      
      * small nit
      
      * update test
      
      * update tokenizer
      
      * nit feature extraction
      
      * update
      
      * update tokenizer test
      
      * adds logit processor and update tokenizer to get suppress tokens
      
      * style
      
      * clean convert
      
      * revert to original modeling tf utils
      
      * Update
      
      * update
      
      * nit
      
      * clean convert file
      
      * update tests and nits
      
      * quality
      
      * slow generation test
      
      * ffn_dim to allow customization
      
      * update readme
      
      * add to toctree
      
      * start fixing integration tests
      
      * update tests and code
      
      * fix feature extractor
      
      * fix config tests common
      
      * update code to fix tests
      
      * fix feature extractor
      
      * nit feature extraction
      
      * update test for new feature extractor
      
      * style
      
      * add abstract

      * large logits with custom decoder input ids

      * wrap around is_torch_available
      
      * fix feature extractor
      
      * correct logits for whisper small.en
      
      * nit
      
      * fix encoder_attention_mask
      
      * some fixes
      
      * remove unnecessary inputs
      
      * nits
      
      * add normalizer file
      
      * update test tokenization
      
      * fix attention mask not defined
      
      * Add model to README
      
      * Fix doc tests
      
      * fix generate
      
      * remove useless encoder attention mask
      
      * update test modeling whisper
      
      * update config to add second set of non-suppress tokens
      
      * nits on feature extractor

      * nit for test tokenizers

      * update tests
      
      * update tests
      
      * update tokenization test
      
      * fixup
      
      * invalidated hf token. Clean convert openai to whisper
      
      * fix logit tests
      
      * fixup
      
      * clean merge
      
      * revert toctree changes

      * remove useless LogitsProcessor

      * Update whisper.mdx
      
      * update config file doc
      
      * update configuration docstring
      
      * update test tokenization
      
      * update tokenization whisper
      Added copied from where needed
      
      * update feature extraction
      
      * nit test name
      
      * style
      
      * quality
      
      * remove get suppress tokens and update non_speech tokens global variables
      
      * Update src/transformers/models/whisper/feature_extraction_whisper.py
      Co-authored-by: default avatarPatrick von Platen <patrick.v.platen@gmail.com>
      
      * clean modeling whisper and test
      Removed the attention mask arguments that are deprecated
      
      * fix large test
      
      * Add multilingual audio test, and translate test
      
      * style
      
      * fix large multilingual test
      
      * nits
      
      * Update docs/source/en/model_doc/whisper.mdx
      Co-authored-by: default avatarPatrick von Platen <patrick.v.platen@gmail.com>
      
      * add copied from for attention layer
      
      * remove attention masks in doc
      
      * add english normalizer
      
      * update tokenization test
      
      * remove copied from in whisper attention: no bias in k_proj only
      
      * wrap around dependencies in english normalizer
      
      * style
      
      * correct import generation logits
      
      * for now, wrap feature extractor with torch
      
      * Update src/transformers/models/whisper/convert_openai_whisper_to_tfms.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/whisper/configuration_whisper.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/whisper.mdx
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * remove torch dependencies for feature extraction and style
      
      * fixup
      
      * nit
      
      * update logits
      
      * style
      
      * nit
      
      * nits and fix final tests
      
      * add `is_more_itertools_available` to utils
      
      * quality
      
      * add begin_suppress_tokens, suppress_tokens to generate args and config

      * clean SuppressTokensLogitsProcessor in generation logits

      * Nit naming

      * add SuppressTokensAtBegin

      * update tests, suppress tokens to None or correct values
      
      * nit and style
      
      * update RAG to fit test and generate_logit
      
      * add copy-pasted statement on English normalizer
      
      * add arguments to config_common_kwargs
      
      * Update src/transformers/generation_utils.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/generation_logits_process.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/whisper/configuration_whisper.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Apply suggestions from code review
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: default avatarPatrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * revert changes based on reviews
      
      * update doc and nits
      
      * more nits
      
      * last nits
      
      * update test configuration common
      
      * add BART name in decoder attention mask documentation
      
      * Update src/transformers/models/whisper/modeling_whisper.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * style
      
      * nit
      
      * nit
      
      * add english.json file to git
      
      * nits on documentation
      
      * nit
      
      * nits
      
      * last styling
      
      * add main toctree file
      
      * remove sentencepiece dependency
      
      * clean init file
      
      * fix tokenizer that has no dependencies on sentencepiece
      
      * update whisper init file, nit
      
      * remove english.json file
      
      * add get decoder prompt id
      
      * revert changes and add forced logit processor
      
      * nit
      
      * clean normalizer
      
      * remove protected
      
      * update
      
      * Update src/transformers/models/whisper/configuration_whisper.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * update based on review
      
      * Update src/transformers/models/whisper/configuration_whisper.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * add batched tests
      Co-authored-by: default avatarPatrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: default avatarNielsRogge <niels.rogge1@gmail.com>
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      45e14038
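      A minimal transcription sketch for the new model (the checkpoint name and
      the dummy LibriSpeech dataset are illustrative assumptions, not part of the
      PR):

      ```python
      from datasets import load_dataset
      from transformers import WhisperProcessor, WhisperForConditionalGeneration

      processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
      model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")

      # A short 16 kHz speech sample.
      ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
      audio = ds[0]["audio"]

      # The feature extractor computes log-Mel input features from the raw audio.
      input_features = processor(
          audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt"
      ).input_features

      # suppress_tokens / begin_suppress_tokens from the config are applied here.
      predicted_ids = model.generate(input_features)
      print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
      ```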
  29. 04 Oct, 2022 1 commit
  30. 30 Sep, 2022 2 commits
    • Kashif Rasul's avatar
      time series forecasting model (#17965) · 5cd16f01
      Kashif Rasul authored
      
      
      * initial files
      
      * initial model via cli
      
      * typos
      
      * make a start on the model config
      
      * ready with configuration
      
      * remove tokenizer ref.
      
      * init the transformer
      
      * added initial model forward to return dec_output
      
      * require gluonts
      
      * update dep. ver table and add as extra
      
      * fixed typo
      
      * add type for prediction_length
      
      * use num_time_features
      
      * use config
      
      * more config
      
      * typos
      
      * oops, another typo
      
      * freq can be none
      
      * default via transformation is 1
      
      * initial transformations
      
      * fix imports
      
      * added transform_start_field
      
      * add helper to create pytorch dataloader
      
      * added initial val and test data loaders
      
      * added initial distr head and loss
      
      * training working
      
      * remove TimeSeriesTransformerTokenizer
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/__init__.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/time_series_transformer/__init__.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * fixed copyright
      
      * removed docs
      
      * remove time series tokenizer
      
      * fixed docs
      
      * fix text
      
      * fix second
      
      * fix default
      
      * fix order
      
      * use config directly
      
      * undo change
      
      * fix comment
      
      * fix year
      
      * fix import
      
      * add additional arguments for training vs. test
      
      * initial greedy inference loop
      
      * fix inference
      
      * comment out token inputs to enc dec
      
      * Use HF encoder/decoder
      
      * fix inference
      
      * Use Seq2SeqTSModelOutput output
      
      * return Seq2SeqTSPredictionOutput
      
      * added default arguments
      
      * fix return_dict true
      
      * scale is a tensor
      
      * output static_features for inference
      
      * clean up some unused bits
      
      * fixed typo
      
      * set return_dict if none
      
      * call model once for both train/predict
      
      * use cache if future_target is none
      
      * initial generate func
      
      * generate arguments
      
      * future_time_feat is required
      
      * return SampleTSPredictionOutput
      
      * removed unneeded classes
      
      * fix when params is none
      
      * fix return dict
      
      * fix num_attention_heads
      
      * fix arguments
      
      * remove unused shift_tokens_right
      
      * add different dropout configs
      
      * implement FeatureEmbedder, Scaler and weighted_average
      
      * remove gluonts dependency
      
      * fix class names
      
      * avoid _variable names
      
      * remove gluonts dependency
      
      * fix imports
      
      * remove gluonts from configuration
      
      * fix docs
      
      * fixed typo
      
      * move utils to examples
      
      * add example requirements
      
      * config has no freq
      
      * initial run_ts_no_trainer
      
      * remove from ignore
      
      * fix output_attentions and removed unused getters/setters

      * removed unused tests
      
      * add dec seq len
      
      * add test_attention_outputs
      
      * set has_text_modality=False
      
      * add config attribute_map
      
      * make style
      
      * make fix-copies
      
      * add encoder_outputs to TimeSeriesTransformerForPrediction forward
      
      * Improve docs, add model to README
      
      * added test_forward_signature
      
      * More improvements
      
      * Add more copied from
      
      * Fix README
      
      * Fix remaining quality issues
      
      * updated encoder and decoder
      
      * fix generate
      
      * output_hidden_states and use_cache are optional
      
      * past key_values returned too
      
      * initialize weights of distribution_output module
      
      * fixed more tests
      
      * update test_forward_signature
      
      * fix return_dict outputs
      
      * Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * removed commented out tests
      
      * added negative binomial and normal output
      
      * Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * move to one line
      
      * Add docstrings
      
      * Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * add try except for assert and raise
      
      * try and raise exception
      
      * fix the documentation formatting
      
      * fix assert call
      
      * fix docstring formatting
      
      * removed input_ids from DOCSTRING
      
      * Update input docstring
      
      * Improve variable names
      
      * Update order of inputs
      
      * Improve configuration
      
      * Improve variable names
      
      * Improve docs
      
      * Remove key_length from tests
      
      * Add extra docs
      
      * initial unittests
      
      * added test_inference_no_head test
      
      * added test_inference_head
      
      * add test_seq_to_seq_generation
      
      * make style
      
      * one line
      
      * assert mean prediction
      
      * removed comments
      
      * Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * fix order of args
      
      * make past_observed_mask optional as well
      
      * added Amazon license header
      
      * updated utils with new fieldnames
      
      * make style
      
      * cleanup
      
      * undo position of past_observed_mask
      
      * fix import
      
      * typo
      
      * more typo
      
      * rename example files
      
      * remove example for now
      
      * Update docs/source/en/_toctree.yml
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update modeling_time_series_transformer.py
      
      fix style
      
      * fixed typo
      
      * fix typo and grammar
      
      * fix style
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: default avatarNielsRogge <niels.rogge1@gmail.com>
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      5cd16f01
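      A rough, untested sketch of the new prediction model on random tensors
      (every shape and config value here is an assumption; the expected past
      window is context_length plus the largest lag in lags_sequence):

      ```python
      import torch
      from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction

      config = TimeSeriesTransformerConfig(
          prediction_length=24,
          context_length=48,
          lags_sequence=[1, 2, 3],  # past window = context_length + max(lags_sequence)
          num_time_features=2,
      )
      model = TimeSeriesTransformerForPrediction(config)

      batch = 4
      past_len = config.context_length + max(config.lags_sequence)

      outputs = model(
          past_values=torch.randn(batch, past_len),
          past_time_features=torch.randn(batch, past_len, config.num_time_features),
          past_observed_mask=torch.ones(batch, past_len),  # 1 = observed, 0 = missing
          future_values=torch.randn(batch, config.prediction_length),
          future_time_features=torch.randn(batch, config.prediction_length, config.num_time_features),
      )
      print(outputs.loss)  # negative log-likelihood of the distribution head
      ```

      Passing future_values trains against the distribution head's loss; for
      inference, model.generate(...) with the same past inputs plus
      future_time_features should return sampled forecast paths instead.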
    • Matt's avatar
      Rebase ESM PR and update all file formats (#19055) · 368b649a
      Matt authored
      
      
      * Rebase ESM PR and update all file formats
      
      * Fix test relative imports
      
      * Add __init__.py to the test dir
      
      * Disable gradient checkpointing
      
      * Remove references to TFESM... FOR NOW >:|
      
      * Remove completed TODOs from tests
      
      * Convert docstrings to mdx, fix-copies from BERT
      
      * fix-copies for the README and index
      
      * Update ESM's __init__.py to the modern format
      
      * Add to _toctree.yml
      
      * Ensure we correctly copy the pad_token_id from the original ESM model
      
      * Tiny grammar nitpicks
      
      * Make the layer norm after embeddings an optional flag
      
      * Update the conversion script to handle other model classes
      
      * Remove token_type_ids entirely, fix attention_masking and add checks to convert_esm.py
      
      * Break the copied from link from BertModel.forward to remove token_type_ids
      
      * Remove debug array saves
      
      * Begin ESM-2 porting
      
      * Add a hacky workaround for the precision issue in original repo
      
      * Code cleanup
      
      * Remove unused checkpoint conversion code
      
      * Fix copyright notices
      
      * Get rid of all references to the TF weights conversion
      
      * Remove token_type_ids from the tests
      
      * Fix test code
      
      * Update src/transformers/__init__.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/__init__.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update README.md
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Add credit
      
      * Remove _ args and __ kwargs in rotary embedding
      
      * Assertively remove asserts
      
      * Replace einsum with torch.outer()
      
      * Fix docstring formatting
      
      * Remove assertions in tokenization
      
      * Add paper citation to ESMModel docstring
      
      * Move vocab list to single line
      
      * Remove ESMLayer from init
      
      * Add Facebook copyrights
      
      * Clean up RotaryEmbedding docstring
      
      * Fix docstring formatting
      
      * Fix docstring for config object
      
      * Add explanation for new config methods
      
      * make fix-copies
      
      * Rename all the ESM- classes to Esm-
      
      * Update conversion script to allow pushing to hub
      
      * Update tests to point at my repo for now
      
      * Set config properly for tests
      
      * Remove the gross hack that forced loss of precision in inv_freq and instead copy the data from the model being converted
      
      * make fixup
      
      * Update expected values for slow tests
      
      * make fixup
      
      * Remove EsmForCausalLM for now
      
      * Fix padding idx test
      
      * Updated README and docs with ESM-1b and ESM-2 separately (#19221)
      
      * Updated README and docs with ESM-1b and ESM-2 separately
      
      * Update READMEs, longer entry with 3 citations
      
      * make fix-copies
      Co-authored-by: default avatarYour Name <you@example.com>
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: default avatarTom Sercu <tsercu@fb.com>
      368b649a
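      A small usage sketch for the ported model (the esm2_t6_8M_UR50D checkpoint
      name is an assumption; any Esm checkpoint should behave the same way):

      ```python
      import torch
      from transformers import AutoTokenizer, EsmForMaskedLM

      tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
      model = EsmForMaskedLM.from_pretrained("facebook/esm2_t6_8M_UR50D")

      # Protein sequences are tokenized one residue per token (plus <cls>/<eos>).
      sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
      inputs = tokenizer(sequence, return_tensors="pt")

      # Mask the 5th residue (index 5 accounts for the leading <cls> token).
      inputs["input_ids"][0, 5] = tokenizer.mask_token_id

      with torch.no_grad():
          logits = model(**inputs).logits

      predicted_id = logits[0, 5].argmax(-1).item()
      print(tokenizer.decode([predicted_id]))  # most likely residue at the mask
      ```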