1. 17 Apr, 2024 1 commit
    • Add OLMo model family (#29890) · e4ea19b9
      Shane A authored
      * Add OLMo using add-new-model-like with Llama
      
      * Fix incorrect tokenizer for OLMo
      
      * Copy-paste relevant OLMo methods and their imports
      
      * Add OLMo config
      
      * Modify OLMo config to follow HF conventions
      
      * Remove unneeded Llama code from OLMo model
      
      * Add ability for OLMo model to output attentions
      
      * Add OLMoPreTrainedModel and OLMoModel
      
      * Add OLMoForCausalLM
      
      * Minor fixes to OLMo model for style and missing functions
      
      * Implement OLMo tokenizer
      
      * Implement OLMo to HF conversion script
      
      * Add tests for OLMo model
      
      * Add tests for OLMo fast tokenizer
      
      * Add auto-generated dummy objects
      
      * Remove unimplemented OLMo classes from auto and init classes and re-format
      
      * Add README and associated auto-generated files
      
      * Use OLMo names for common properties
      
      * Run make fixup
      
      * Remove `|` from OLMo typing
      
      * Remove unneeded tokenization_olmo.py
      
      * Revert model, config and converter to add-new-model-like Llama
      
      * Move logic for adding bos/eos token into GPTNeoxTokenizerFast
      
      * Change OLMoConfig defaults to match OLMo-7B
      
      * Use GPTNeoXTokenizerFast in OLMo tokenizer tests
      
      * Modify auto-generated OLMoModelTests to work for OLMo
      
      * Add non-parametric layer norm OLMoLayerNorm
      
      * Update weight conversion script for OLMo
      
      * Fix __init__ and auto structure for OLMo
      
      * Fix errors from make fixup
      
      * Remove OLMoTokenizerFast from documentation
      
      * Add missing 'Copied from' for OLMoModel._update_causal_mask
      
      * Run make fix-copies
      
      * Rearrange string replacements in OLMoForCausalLM Copied from
      
      * Move OLMo and Llama CausalLM.forward example into global constants
      
      * Fix OLMO_GENERATION_EXAMPLE doc string typo
      
      * Add option for qkv clipping to OLMo
      
      * Rearrange OLMoConfig kwargs in convert_olmo_weights_to_hf
      
      * Add clip_qkv to OLMoConfig in convert_olmo_weights_to_hf
      
      * Fix OLMo tokenization bug using conversion script
      
      * Keep model in full precision after conversion
      
      * Do not add eos token automatically
      
      * Update references to OLMo model in HF Hub
      
      * Do not add eos token during encoding by default
      
      * Fix Llama generation example
      
      * Run make fixup
      
      * OLMo 7B integration test fix
      
      * Remove unneeded special case for OLMoConfig
      
      * OLMo 7B Twin 2T integration test fix
      
      * Fix test_model_7b_greedy_generation
      
      * Remove test_compile_static_cache
      
      * Fix OLMo and Llama generation example
      
      * Run make fixup
      
      * Revert "OLMo 7B integration test fix"
      
      This reverts commit 4df56a4b150681bfa559846f40e9b7b7f97d7908.
      
      * Revert "OLMo 7B Twin 2T integration test fix"
      
      This reverts commit 9ff65a4a294ace89ab047b793ca55e623a9ceefc.
      
      * Ungate 7B integration tests and fix greedy generation test
      
      * Add retries for flaky test_eager_matches_sdpa_generate
      
      * Fix output of doc example for OLMoForCausalLM.forward
      
      * Downsize OLMo doc test for OLMoForCausalLM.forward to 1B model
      
      * Try fix incorrect characters in OLMoForCausalLM.forward doc test
      
      * Try fix incorrect characters in OLMoForCausalLM.forward doc test using end quotes
      
      * Remove pretraining_tp from OLMo config and model
      
      * Add missing 'Copied from' instances
      
      * Remove unneeded causal_mask from OLMoModel
      
      * Revert Llama changes
      
      * Ignore copy for OLMoForCausalLM.forward
      
      * Change 'OLMo' to 'Olmo' in classes
      
      * Move minimal OLMo tokenization tests to model tests
      
      * Add missed 'Copied from' for repeat_kv
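The OLMo entry above introduces a non-parametric layer norm (OLMoLayerNorm, no learnable weight or bias) and an optional clip_qkv setting that clamps the query/key/value projections. A minimal pure-Python sketch of both ideas (illustrative only, not the actual transformers implementation):

```python
import math

def non_parametric_layer_norm(x, eps=1e-5):
    """Normalize to zero mean and unit variance over the last axis,
    with no learnable weight or bias (the OLMoLayerNorm idea)."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def clip_qkv(values, clip):
    """Clamp projected query/key/value activations to [-clip, clip],
    as the optional clip_qkv config value does."""
    return [max(-clip, min(clip, v)) for v in values]

normed = non_parametric_layer_norm([1.0, 2.0, 3.0, 4.0])
clipped = clip_qkv([-10.0, 0.5, 9.0], clip=8.0)
print(clipped)  # [-8.0, 0.5, 8.0]
```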
  2. 15 Apr, 2024 1 commit
    • Add Idefics2 (#30253) · 6b78360e
      amyeroberts authored
      
      * Initial add model additions
      
      * Test
      
      * All weights loading
      
      * Can perform full forward pass
      
      * Local and remote the same
      
      * Matching local and remote
      
      * Fixup
      
      * Idefics2Model importable; fixup docstrings
      
      * Don't skip by default
      
      * Remove deprecated use_resampler arg
      
      * Remove self.config
      
      * DecoupledLinear takes config
      
      * Tidy up
      
      * Enable eager attention and tidy up
      
      * Most tests passing
      
      * Update for batch of processed images
      
      * Add image processor
      
      * Update doc pages
      
      * Update conversion script
      
      * Remove erroneous breakpoint
      
      * Remove accidental spelling change
      
      * Update to reflect changes on hub - make generate work
      
      * Fix up
      
      * Image processor tests
      
      * Update tests
      
      * Add a processor
      
      * Add a processor
      
      * Update convert script
      
      * Update modeling file - remove fixmes
      
      * Bug fix
      
      * Add processing test
      
      * Use processor
      
      * Fix up
      
      * Update src/transformers/models/idefics2/modeling_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>

      * Update src/transformers/models/idefics2/modeling_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * Fix test
      
      * Update config - PR comments and defaults align with checkpoint
      
      * Reviewer comments
      
      * Add copied froms for flash attention
      
      * Update src/transformers/models/idefics2/modeling_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>

      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Remove qk_layer_norm and freeze_layers functionality
      
      * Fix
      
      * Remove freeze_layer options from config
      
      * Sync with upstream main
      
      * Fix attention shapes siglip
      
      * Remove Llava-next refs - TO REBASE
      
      * Use AutoModel for text model
      
      * Add comment to explain vision embeddings
      
      * Fix issue with tie_word_embeddings
      
      * Address review comments
      
      * Fix and fix up
      
      * Chat templates for idefics
      
      * Fix copies
      
      * Fix
      
      * Add layer norms to FA2
      
      * Fix tests
      
      * Apply suggestions from code review
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * Fix
      
      * Review comments
      
      * Update src/transformers/models/idefics2/modeling_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * Update inputs merger
      
      * Merge weights in correct order
      
      * Update convert script
      
      * Update src/transformers/models/idefics2/processing_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * Update template
      
      * Model code examples (fix idefics too)
      
      * More review comments
      
      * Tidy up
      
      * Update processing
      
      * Fix attention mask preparation
      
      * Update inputs_merger inputs
      
      * Vectorize inputs_merger
      
      * Update src/transformers/models/idefics2/__init__.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/idefics2/modeling_idefics2.py
      
      * Review comments
      
      * saying bye to the `qk_layer_norms`
      
      * Simplify
      
      * Update latents
      
      * Remove erroneous readme changes
      
      * Return images when applying chat template
      
      * Fix bug - prompt images are for a single sample
      
      * Update src/transformers/models/idefics2/modeling_idefics2.py
      
      * image splitting
      
      * fix test
      
      * some more comment
      
      * some comment
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Update src/transformers/models/idefics2/image_processing_idefics2.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update processor
      
      * Update model tests
      
      * Update src/transformers/models/idefics2/processing_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>

      * Update src/transformers/models/idefics2/processing_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * Don't add BOS in template
      
      * Update src/transformers/models/idefics2/processing_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * Remove index in examples
      
      * Update tests to reflect #13
      
      * Update src/transformers/models/idefics2/processing_idefics2.py
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * PR comment - consistent typing
      
      * Update readme and model doc
      
      * Update docs
      
      * Update checkpoint references
      
      * Update examples
      
      * Fix and update tests
      
      * Small addition
      
      * Update tests - remove copied from as no ignore placement copy could be found
      
      * Update example
      
      * small fixes
      
      * Update docs/source/en/model_doc/idefics2.md
      Co-authored-by: Victor SANH <victorsanh@gmail.com>

      * Update docs/source/en/model_doc/idefics2.md
      Co-authored-by: Victor SANH <victorsanh@gmail.com>

      * Update README.md
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      
      * Connector model as bridge
      
      * Fix up
      
      * Fix up
      
      * Don't pass model inputs for generation kwargs update
      
      * IDEFICS-2 -> Idefics2
      
      * Remove config archive name
      
      * IDEFICS-2 -> Idefics2
      
      * Add back llava-next
      
      * Update readmes
      
      * Add requirements for processor tester
      
      * Use custom convert_to_rgb to avoid possible BC
      
      * Fix doc example
      
      * Fix doc example
      
      * Skip model doc tests - as model too large
      
      * More doc example - account for image splitting
      
      * Update src/transformers/image_transforms.py
      
      * Fix config doctest
      
      ---------
      Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
      Co-authored-by: ArthurZucker <arthur.zucker@gmail.com>
      Co-authored-by: Victor SANH <victorsanh@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
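Several commits above refactor and vectorize Idefics2's "inputs merger", which splices image embeddings into the text embedding sequence wherever an image-placeholder token appears. A toy sketch of the idea (the placeholder id and one-element "embeddings" are made up for illustration; the real code works on batched tensors):

```python
IMAGE_TOKEN = -1  # hypothetical placeholder id, not the real Idefics2 value

def merge_inputs(token_ids, token_embeds, image_embeds):
    """Replace each image-placeholder position in the text sequence
    with the next image embedding, in order."""
    img_iter = iter(image_embeds)
    return [
        next(img_iter) if tok == IMAGE_TOKEN else emb
        for tok, emb in zip(token_ids, token_embeds)
    ]

merged = merge_inputs(
    token_ids=[5, IMAGE_TOKEN, 7],
    token_embeds=[[0.1], [0.0], [0.3]],  # [0.0] sits in the placeholder slot
    image_embeds=[[9.9]],
)
print(merged)  # [[0.1], [9.9], [0.3]]
```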
  3. 11 Apr, 2024 1 commit
    • Adding grounding dino (#26087) · b752ad30
      Eduardo Pacheco authored
      
      * Fixed typo when converting weights to GroundingDINO vision backbone
      
      * Final modifications on modeling
      
      * Removed unnecessary class
      
      * Fixed convert structure
      
      * Added image processing
      
      * make fixup partially completed
      
      * Now text_backbone_config has its own class
      
      * Modified convert script
      
      * Removed unnecessary config attribute
      
      * Added new function to generate sub sentence mask
      
      * Renamed parameters with gamma in the name as it's currently not allowed
      
      * Removed tokenization and image_processing scripts since we'll map from existing models
      
      * Fixed some issues with configuration
      
      * Just some modifications on conversion script
      
      * Other modifications
      
      * Copied deformable detr
      
      * First commit
      
      * Added bert to model
      
      * Bert validated
      
      * Created Text and Fusion layers for Encoder
      
      * Adapted Encoder layer
      
      * Fixed typos
      
      * Adjusted Encoder
      
      * Converted encoder to hf
      
      * Modified Decoder Layer
      
      * Modified main decoder class
      
      * Removed copy comments
      
      * Fixed forward from GroundingDINOModel and GroundingDINODecoder
      
      * Added all necessary layers, configurations and forward logic up to GroundingDINOModel
      
      * Added all layers to conversion
      
      * Fixed outputs for GroundingDINOModel and GroundingDINOForObjectDetection
      
      * Fixed mask input to encoders and fixed nn.MultiheadAttention batch first and attn output
      
      * Fixed forward from GroundingDINOTextEnhancerLayer
      
      * Fixed output bug with GroundingDINODeformableLayer
      
      * Fixed bugs that prevent GroundingDINOForObjectDetection to run forward method
      
      * Fixed attentions to be passed correctly
      
      * Passing temperature arg when creating Sine position embedding
      
      * Removed copy comments
      
      * Added temperature argument for position embedding
      
      * Fixed typo when converting weights to GroundingDINO vision backbone
      
      * Final modifications on modeling
      
      * Removed unnecessary class
      
      * Fixed convert structure
      
      * Added image processing
      
      * make fixup partially completed
      
      * Now text_backbone_config has its own class
      
      * Modified convert script
      
      * Removed unnecessary config attribute
      
      * Added new function to generate sub sentence mask
      
      * Renamed parameters with gamma in the name as it's currently not allowed
      
      * Removed tokenization and image_processing scripts since we'll map from existing models
      
      * Fixed some issues with configuration
      
      * Just some modifications on conversion script
      
      * Other modifications
      
      * Fix style
      
      * Improve fixup
      
      * Improve conversion script
      
      * Improve conversion script
      
      * Add GroundingDINOProcessor
      
      * More improvements
      
      * Return token type ids
      
      * something
      
      * Fix more tests
      
      * More improvements
      
      * More cleanup
      
      * More improvements
      
      * Fixed tests, improved modeling and config
      
      * More improvements and fixing tests
      
      * Improved tests and modeling
      
      * Improved tests and added image processor
      
      * Improved tests inference
      
      * More improvements
      
      * More test improvements
      
      * Fixed last test
      
      * Improved docstrings and comments
      
      * Fix style
      
      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>

      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>

      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>

      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>

      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
      
      * Better naming
      
      * Better naming
      
      * Added Copied statement
      
      * Added Copied statement
      
      * Moved param init from GroundingDINOBiMultiHeadAttention
      
      * Better naming
      
      * Fixing clamp style
      
      * Better naming
      
      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

      * Update src/transformers/models/grounding_dino/configuration_grounding_dino.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>

      * Update src/transformers/models/grounding_dino/convert_grounding_dino_to_hf.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>

      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
      
      * Improving conversion script
      
      * Improved config
      
      * Improved naming
      
      * Improved naming again
      
      * Improved grounding-dino.md
      
      * Moved grounding dino to multimodal
      
      * Update src/transformers/models/grounding_dino/convert_grounding_dino_to_hf.py
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
      
      * Fixed docstrings and style
      
      * Fix docstrings
      
      * Remove timm attributes
      
      * Reorder imports
      
      * More improvements
      
      * Add Grounding DINO to pipeline
      
      * Remove model from check_repo
      
      * Added grounded post_process to GroundingDINOProcessor
      
      * Fixed style
      
      * Fixed GroundingDINOTextPrenetConfig docstrings
      
      * Aligned inputs.keys() when both image and text are passed with model_input_names
      
      * Added tests for GroundingDINOImageProcessor and GroundingDINOProcessor
      
      * Testing post_process_grounded_object_detection from GroundingDINOProcessor at test_inference_object_detection_head
      
      * Fixed order
      
      * Marked test with require_torch
      
      * Temporarily changed repo_id
      
      * More improvements
      
      * Fix style
      
      * Final improvements
      
      * Improve annotators
      
      * Fix style
      
      * Add is_torch_available
      
      * Remove type hints
      
      * vocab_tokens as one liner
      
      * Removed print statements
      
      * Renamed GroundingDINOTextPrenetConfig to GroundingDINOTextConfig
      
      * remove unnecessary comments
      
      * Removed unnecessary tests on conversion script
      
      * Renamed GroundingDINO to camel case GroundingDino
      
      * Fixed GroundingDinoProcessor docstrings
      
      * loading MSDA kernels in the modeling file
      
      * Fix copies
      
      * Replace nn.multiheadattention
      
      * Replace nn.multiheadattention
      
      * Fixed inputs for GroundingDinoMultiheadAttention & order of modules
      
      * Fixed processing to avoid messing with inputs
      
      * Added more tips for GroundingDino
      
      * Make style
      
      * Changing name to align with SAM
      
      * Replace final nn.multiheadattention
      
      * Fix model tests
      
      * Update year, remove GenerationTesterMixin
      
      * Address comments
      
      * Address more comments
      
      * Rename TextPrenet to TextModel
      
      * Rename hidden_states
      
      * Address more comments
      
      * Address more comments
      
      * Address comment
      
      * Address more comments
      
      * Address merge
      
      * Address comment
      
      * Address comment
      
      * Address comment
      
      * Make style
      
      * Added layer norm eps to layer norms
      
      * Address more comments
      
      * More fixes
      
      * Fixed equivalence
      
      * Make fixup
      
      * Remove print statements
      
      * Address comments
      
      * Address comments
      
      * Address comments
      
      * Address comments
      
      * Address comments
      
      * Address comments
      
      * Add comment
      
      * Address comment
      
      * Remove overwriting of test
      
      * Fix bbox_embed
      
      * Improve decoder_bbox_embed_share
      
      * Simplify outputs
      
      * Updated post_process_grounded_object_detection
      
      * Renamed sources to feature_maps
      
      * Improved tests for Grounding Dino ImageProcessor and Processor
      
      * Fixed test requirements and imports
      
      * Fixed image_processing
      
      * Fixed processor tests
      
      * Fixed imports for image processing tests
      
      * Fix copies
      
      * Updated modeling
      
      * Fix style
      
      * Moved functions to correct position
      
      * Fixed copy issues
      
      * Update src/transformers/models/deformable_detr/modeling_deformable_detr.py
      Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>

      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>

      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>
      
      * Keeping consistency custom cuda kernels for MSDA
      
      * Make GroundingDinoProcessor logic clearer
      
      * Updated Grounding DINO checkpoints
      
      * Changed tests to correct structure
      
      * Updated gpu-cpu equivalence test
      
      * fix copies
      
      * Update src/transformers/models/grounding_dino/processing_grounding_dino.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Update src/transformers/models/grounding_dino/processing_grounding_dino.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Update src/transformers/models/grounding_dino/configuration_grounding_dino.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Fixed errors and style
      
      * Fix copies
      
      * Removed inheritance from PreTrainedModel from GroundingDinoTextModel
      
      * Fixed GroundingDinoTextModel
      
      * Fixed type of default backbone config
      
      * Fixed missing methods for GroundingDinoTextModel and Added timm support for GroundingDinoConvEncoder
      
      * Addressed comments
      
      * Addressed batched image processing tests
      
      * Addressed zero shot test comment
      
      * Addressed tip comment
      
      * Removed GroundingDinoTextModel from check_repo
      
      * Removed inplace masking
      
      * Addressed comments
      
      * Addressed comments
      
      * Addressed comments
      
      * Fix copies
      
      * Fixing timm test
      
      * Fixed batching equivalence test
      
      * Update docs/source/en/model_doc/grounding-dino.md
      Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>

      * Update docs/source/en/model_doc/grounding-dino.md
      Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>

      * Update docs/source/en/model_doc/grounding-dino.md
      Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>
      
      * Addressed more comments
      
      * Added a new comment
      
      * Reduced image size
      
      * Addressed more comments
      
      * Nits
      
      * Nits
      
      * Changed the way text_config is initialized
      
      * Update src/transformers/models/grounding_dino/processing_grounding_dino.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      ---------
      Co-authored-by: Niels <niels.rogge1@gmail.com>
      Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: Eduardo Pacheco <eduardo.pacheco@limehome.com>
      Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>
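The post_process_grounded_object_detection commits above turn raw per-box text-token logits into scored detections: apply a sigmoid, take the best token score per box, and keep boxes above a threshold. A simplified sketch of that filtering step (the real processor also maps token spans back to label phrases):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def post_process(logits, boxes, threshold=0.5):
    """Score each box by its best text-token logit (after sigmoid)
    and keep only the boxes whose score clears the threshold."""
    kept = []
    for box_logits, box in zip(logits, boxes):
        score = max(sigmoid(l) for l in box_logits)
        if score >= threshold:
            kept.append((score, box))
    return kept

detections = post_process(
    logits=[[2.0, -1.0], [-3.0, -2.0]],
    boxes=[[0.1, 0.1, 0.4, 0.4], [0.5, 0.5, 0.9, 0.9]],
)
```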
  4. 10 Apr, 2024 1 commit
    • Add recurrent gemma (#30143) · 0fe44059
      Arthur authored
      
      * Fork.
      
      * RecurrentGemma initial commit.
      
      * Updating __init__.py.
      
      * Minor modification to how we initialize the cache.
      Changing how the config specifies the architecture.
      
      * Reformat code to 4 spaces.
      Fixed a few typos.
      
      * Fixed the forward pass.
      Still unclear on the cache?
      
      * Fixed the RecurrentGemmaForCausalLM
      
      * Minor comment that we might not need attention_mask and output_attention arguments.
      
      * Now cache should work as well.
      
      * Adding a temporary example to check whether the model generation works.
      
      * Adding the tests and updating imports.
      
      * Adding the example file missing in the previous commit.
      
      * First working example.
      
      * Removing .gitignore and reverting parts of __init__.
      
      * Re-add .gitignore.
      
      * Addressing comments for configuration.
      
      * Move mask creation to `_prepare_inputs_for_generation`.
      
      * First try at integration tests:
      1. AttributeError: 'GriffinCausalLMOutput' object has no attribute 'attentions'.
      2. `cache_position` not passed
      
      * Transferring between machines.
      
      * Running normal tests.
      
      * Minor fix.
      
      * More fixes.
      
      * Addressing more comments.
      
      * Minor fixes.
      
      * first stab at cleanup
      
      * more refactoring
      
      * fix copies and else
      
      * renaming and get init to work
      
      * fix causal mask creation
      
      * update
      
      * nit
      
      * fix a hell lot of things
      
      * updates
      
      * update conversion script
      
      * make all keys importable
      
      * nits
      
      * add auto mappings
      
      * properly convert ffw_up and down
      
      * add scaling
      
      * fix generations
      
      * for recurrent dtype
      
      * update
      
      * fix going beyond window
      
      * fixup
      
      * add missing files
      
      * current updates to remove last einops
      
      * finish modeling refactor
      
      * TADA
      
      * fix compile
      
      * fix most failing tests
      
      * update tests
      
      * refactor and update
      
      * update
      
      * nits, fixup and update tests
      
      * more fixup
      
      * nits
      
      * fix imports
      
      * test format
      
      * fixups
      
      * nits
      
      * tuple typing
      
      * fix code quality
      
      * add model card
      
      * fix doc
      
      * skip most generation tests
      
      * nits
      
      * style
      
      * doc fixes
      
      * fix pr and check_copies?
      
      * last nit
      
      * oupsy
      
      * Apply suggestions from code review
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * update
      
      * Update src/transformers/models/recurrent_gemma/convert_recurrent_gemma_to_hf.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Update tests/models/recurrent_gemma/test_modeling_recurrent_gemma.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Update tests/models/recurrent_gemma/test_modeling_recurrent_gemma.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Update tests/models/recurrent_gemma/test_modeling_recurrent_gemma.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

      * Update tests/models/recurrent_gemma/test_modeling_recurrent_gemma.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * update based on review
      
      * doc nit
      
      * fix quality
      
      * quality
      
      * fix slow test model path
      
      * update default dtype
      
      * ignore attributes that can be safely ignored in check config attributes
      
      * 0lallalala come on
      
      * save nit
      
      * style
      
      * remove to dict update
      
      * make sure we can also run in float16
      
      * style
      
      ---------
      Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
      Co-authored-by: Aleksandar Botev <botev@google.com>
      Co-authored-by: Leonard Berrada <lberrada@users.noreply.github.com>
      Co-authored-by: anushanf <anushanf@google.com>
      Co-authored-by: botev <botevmg@gmail.com>
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
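RecurrentGemma replaces the usual growing KV cache with a fixed-size recurrent state, which is why the commits above spend so much effort on cache handling. A heavily simplified sketch of the core idea, a gated linear recurrence scanned over the sequence (the scalar form and decay value are illustrative; the real Griffin-style layer operates on vectors with learned, input-dependent gates):

```python
import math

def linear_recurrence(xs, a=0.9):
    """Scan h_t = a * h_{t-1} + sqrt(1 - a^2) * x_t over the inputs.
    The state h stays constant-size no matter how long the sequence is."""
    h = 0.0
    scale = math.sqrt(1.0 - a * a)
    states = []
    for x in xs:
        h = a * h + scale * x
        states.append(h)
    return states

states = linear_recurrence([1.0, 0.0, 0.0])
```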
  5. 30 Mar, 2024 1 commit
  6. 27 Mar, 2024 1 commit
    • Add Qwen2MoE (#29377) · 1c39974a
      Bo Zheng authored
      
      * add support for qwen2 MoE models
      
      * update docs
      
      * add support for qwen2 MoE models
      
      * update docs
      
      * update model name & test
      
      * update readme
      
      * update class names & readme & model_doc of Qwen2MoE.
      
      * update architecture name
      
      * fix qwen2_moe tests
      
      * use Qwen2Tokenizer instead of Qwen2MoeTokenizer
      
      * update modeling_qwen2_moe.py
      
      * fix model architecture
      
      * fix qwen2_moe tests
      
      * use Qwen2Tokenizer instead of Qwen2MoeTokenizer
      
      * update modeling_qwen2_moe.py
      
      * fix model architecture
      
      * fix style
      
      * fix test when there are sparse and non sparse layers
      
      * fixup
      
      * Update README.md
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fixup
      
      * fixup
      
      * add archive back
      
      * add support for qwen2 MoE models
      
      * update docs
      
      * update model name & test
      
      * update readme
      
      * update class names & readme & model_doc of Qwen2MoE.
      
      * update architecture name
      
      * fix qwen2_moe tests
      
      * use Qwen2Tokenizer instead of Qwen2MoeTokenizer
      
      * update modeling_qwen2_moe.py
      
      * fix model architecture
      
      * fixup
      
      * fix qwen2_moe tests
      
      * use Qwen2Tokenizer instead of Qwen2MoeTokenizer
      
      * fix style
      
      * fix test when there are sparse and non sparse layers
      
      * fixup
      
      * add archive back
      
      * fix integration test
      
      * fixup
      
      ---------
Co-authored-by: bozheng-hit <dsoul0621@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      1c39974a
  7. 20 Mar, 2024 2 commits
    • NielsRogge's avatar
      Add LLaVa-1.6, bis (#29586) · d91fd7f9
      NielsRogge authored
      
      
      * First draft
      
      * Fix tests, add docs
      
      * Improve docstrings
      
      * Fix test
      
      * Address comments
      
      * Address comments
      
      * Remove vocab_size attribute
      
      * Remove batch_size
      
      * Address comment
      
      * Add image processor tests
      
      * Support fx
      
      * Update docstring
      
      * Add support for 34b
      
      * Convert 34b model
      
      * Add integration tests
      
      * Update checkpoints
      
      * Convert vicuna-13b, remove doc tests
      
      * Remove script
      
      * Remove file
      
      * Address comments
      
      * Improve docstrings
      
      * Deprecate vocab_size
      
      * Remove aspect_ratio_setting
      
      * Address comments
      
      * Update READMEs
      
      * Add tips about chat templates
      
      * Fix tests
      
      * Deprecate vocab_size safely
      
      * Update tests
      
      ---------
Co-authored-by: Amy Roberts <22614925+amyeroberts@users.noreply.github.com>
      d91fd7f9
    • Arthur Zucker's avatar
      v4.40.0.dev.0 · 1248f092
      Arthur Zucker authored
      1248f092
  8. 19 Mar, 2024 1 commit
    • StevenBucaille's avatar
      Implementation of SuperPoint and AutoModelForKeypointDetection (#28966) · 56baa033
      StevenBucaille authored
      
      
      * Added SuperPoint docs
      
      * Added tests
      
      * Removed commented part
      
      * Commit to create and fix add_superpoint branch with a new branch
      
      * Fixed dummy_pt_objects
      
      * Committed missing files
      
      * Fixed README.md
      
      * Apply suggestions from code review
      
      Fixed small changes
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Moved ImagePointDescriptionOutput from modeling_outputs.py to modeling_superpoint.py
      
      * Removed AutoModelForKeypointDetection and related stuff
      
      * Fixed inconsistencies in image_processing_superpoint.py
      
      * Moved infer_on_model logic simply in test_inference
      
      * Fixed bugs, added labels to forward method with checks whether it is properly a None value, also added tests about this logic in test_modeling_superpoint.py
      
      * Added tests to SuperPointImageProcessor to ensure that images are properly converted to grayscale
      
      * Removed remaining mentions of MODEL_FOR_KEYPOINT_DETECTION_MAPPING
      
      * Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Fixed from (w, h) to (h, w) as input for tests
      
      * Removed unnecessary condition
      
      * Moved last_hidden_state to be the first returned
      
      * Moved last_hidden_state to be the first returned (bis)
      
      * Moved last_hidden_state to be the first returned (ter)
      
      * Switched image_width and image_height in tests to match recent changes
      
      * Added config as first SuperPointConvBlock init argument
      
      * Reordered README's after merge
      
      * Added missing first config argument to SuperPointConvBlock instantiations
      
      * Removed formatting error
      
      * Added SuperPoint to README's de, pt-br, ru, te and vi
      
      * Checked out README_fr.md
      
      * Fixed README_fr.md
      
      * Test fix README_fr.md
      
      * Test fix README_fr.md
      
      * Last make fix-copies !
      
      * Updated checkpoint path
      
      * Removed unused SuperPoint doc
      
      * Added missing image
      
      * Update src/transformers/models/superpoint/modeling_superpoint.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Removed unnecessary import
      
      * Update src/transformers/models/superpoint/modeling_superpoint.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Added SuperPoint to _toctree.yml
      
      ---------
Co-authored-by: steven <steven.bucaillle@gmail.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Steven Bucaille <steven.bucaille@buawei.com>
      56baa033
  9. 18 Mar, 2024 1 commit
    • Yoach Lacombe's avatar
      Add MusicGen Melody (#28819) · c43b380e
      Yoach Lacombe authored
      
      
      * first modeling code
      
      * make repository
      
      * still WIP
      
      * update model
      
      * add tests
      
      * add latest change
      
      * clean docstrings and copied from
      
      * update docstrings md and readme
      
      * correct chroma function
      
* correct copied from and remove unrelated test
      
      * add doc to toctree
      
      * correct imports
      
      * add convert script to notdoctested
      
      * Add suggestion from Sanchit
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
* correct get_unconditional_inputs docstrings
      
      * modify README according to SANCHIT feedback
      
      * add chroma to audio utils
      
      * clean librosa and torchaudio hard dependencies
      
      * fix FE
      
      * refactor audio decoder -> audio encoder for consistency with previous musicgen
      
      * refactor conditional -> encoder
      
      * modify sampling rate logics
      
      * modify license at the beginning
      
      * refactor all_self_attns->all_attentions
      
      * remove ignore copy from causallm generate
      
      * add copied from for from_sub_models
      
      * fix make copies
      
      * add warning if audio is truncated
      
      * add copied from where relevant
      
      * remove artefact
      
      * fix convert script
      
      * fix torchaudio and FE
      
      * modify chroma method according to feedback-> better naming
      
      * refactor input_values->input_features
      
      * refactor input_values->input_features and fix import fe
      
* add input_features to docstrings
      
      * correct inputs_embeds logics
      
      * remove dtype conversion
      
      * refactor _prepare_conditional_hidden_states_kwargs_for_generation ->_prepare_encoder_hidden_states_kwargs_for_generation
      
      * change warning for chroma length
      
      * Update src/transformers/models/musicgen_melody/convert_musicgen_melody_transformers.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * change way to save wav, using soundfile
      
      * correct docs and change to soundfile
      
      * fix import
      
      * fix init proj layers
      
      * remove line breaks from md
      
      * fix issue with docstrings
      
      * add FE suggestions
      
      * improve is in logics and remove useless imports
      
      * remove custom from_pretrained
      
      * simplify docstring code
      
      * add suggestions for modeling tests
      
      * make style
      
      * update converting script with sanity check
      
      * remove encoder attention mask from conditional generation
      
      * replace musicgen melody checkpoints with official orga
      
      * rename ylacombe->facebook in checkpoints
      
      * fix copies
      
* remove unnecessary warning
      
      * add shape in code docstrings
      
      * add files to slow doc tests
      
      * fix md bug and add md to not_tested
      
      * make fix-copies
      
      * fix hidden states test and batching
      
      ---------
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      c43b380e
  10. 15 Mar, 2024 1 commit
    • Saurabh Dash's avatar
      Cohere Model Release (#29622) · 0e4a1c34
      Saurabh Dash authored
      
      
      * Cohere Model Release (#1)
      
      Cohere Model Release
      
      * Remove unnecessary files and code (#2)
      
      Some cleanup
      
      * Delete cohere-model directory (#3)
      
      * Make Fix (#5)
      
      * Pr fixes (#6)
      
      * fixes for pr
      
      * pr fixes for the format
      
      * pr fixes for the format
      
      * src/transformers/models/auto/tokenization_auto.py
      
      * Tokenizer test (#8)
      
      * tokenizer test
      
      * format fix
      
      * Adding Docs and other minor changes (#7)
      
      * Add modeling tests (#9)
      
      * Smol Fix (#11)
      
      * tokenization tests are fixed
      
      * format fixes
      
      * fix pr doc tests
      
      * fix pr doc tests
      
      * fix pr doc tests
      
      * fix pr style check
      
      * small changes in cohere.md
      
      * FIX: Address final comments for transformers integration (#13)
      
      * fix modeling final nits and add proper test file
      
      * for now leave empty tests
      
      * add integration test
      
      * push new test
      
      * fix modeling cohere (#14)
      
      * Update chat templates to use the new API (#15)
      
      ---------
Co-authored-by: ahmetustun <ahmetustun89@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      0e4a1c34
  11. 14 Mar, 2024 1 commit
  12. 11 Mar, 2024 2 commits
  13. 26 Feb, 2024 1 commit
  14. 16 Feb, 2024 1 commit
  15. 09 Feb, 2024 1 commit
  16. 06 Feb, 2024 1 commit
    • Klaus Hipp's avatar
      [Docs] Add missing language options and fix broken links (#28852) · 1c31b7aa
      Klaus Hipp authored
      * Add missing entries to the language selector
      
      * Add links to the Colab and AWS Studio notebooks for ONNX
      
      * Use anchor links in CONTRIBUTING.md
      
      * Fix broken hyperlinks due to spaces
      
      * Fix links to OpenAI research articles
      
      * Remove confusing footnote symbols from author names, as they are also considered invalid markup
      1c31b7aa
  17. 29 Jan, 2024 1 commit
  18. 25 Jan, 2024 1 commit
    • NielsRogge's avatar
      Add Depth Anything (#28654) · 963db81a
      NielsRogge authored
      * First draft
      
      * More improvements
      
      * More improvements
      
      * More improvements
      
      * More improvements
      
      * Add docs
      
      * Remove file
      
      * Add copied from
      
      * Address comments
      
      * Address comments
      
      * Address comments
      
      * Fix style
      
      * Update docs
      
      * Convert all checkpoints, add integration test
      
      * Rename checkpoints
      
      * Add pretrained backbone attributes
      
      * Fix default config
      
      * Address comment
      
      * Add figure to docs
      
      * Fix bug thanks to @xenova
      
      * Update conversion script
      
      * Fix integration test
      963db81a
  19. 19 Jan, 2024 1 commit
  20. 18 Jan, 2024 1 commit
    • Yoach Lacombe's avatar
      Add new meta w2v2-conformer BERT-like model (#28165) · d2cdefb9
      Yoach Lacombe authored
      
      
      * first commit
      
      * correct default value non causal
      
      * update config and modeling code
      
      * update converting checkpoint
      
      * clean modeling and fix tests
      
      * make style
      
      * add new config parameters to docstring
      
      * fix copied from statements
      
      * Apply suggestions from code review
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * make position_embeddings_type docstrings clearer
      
      * clean converting script
      
      * remove function not used
      
      * clean modeling file
      
      * apply suggestion for test file + add convert script to not_doctested
      
      * modify tests according to review - cleaner logic and more tests
      
      * Apply nit suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * add checker of valid position embeddings type
      
      * instantiate new layer norm layer with the right eps
      
      * fix freeze_feature_encoder since it can be None in some cases
      
      * add test same output in convert script
      
      * restore wav2vec2conformer and add new model
      
      * create processor and FE + clean
      
      * add new model code
      
      * fix convert script and set default config parameters
      
      * correct model id paths
      
      * make style
      
      * make fix-copies and cleaning files
      
      * fix copied from statements
      
* complete .md and fix copies
      
      * clean convert script argument defaults
      
      * fix config parameters docstrings
      
      * fix config docstring
      
      * add copied from and enrich FE tests
      
      * fix copied from and repo-consistency
      
      * add autotokenizer
      
      * make test input length shorter and change docstring code
      
      * fix docstrings and copied from
      
      * add add_adapter to ASR training example
      
      * make testing of adapters more robust
      
      * adapt to multi adapter layers
      
      * refactor input_values->input_features and remove w2v2-bert feature extractor
      
      * remove pretraining model
      
      * remove depreciated features and useless lines
      
      * add copied from and ignore statements to modeling tests
      
      * remove pretraining model #2
      
      * change import in convert script
      
      * change default in convert script
      
      * update readme and remove useless line
      
      * Update tests/models/wav2vec2_bert/test_processor_wav2vec2_bert.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * refactor BERT to Bert for consistency
      
      * remove useless ignore copy statement
      
      * add persistent to buffer in rotary
      
      * add eps in LayerNorm init and remove copied from
      
      * add adapter activation parameters and add copied from statements
      
      * Fix copied statements and add unitest.skip reasons
      
      * add copied statement in test_processor
      
      * refactor processor
      
      * make style
      
      * replace numpy random by torch rand
      
      * remove expected output CTC
      
      * improve converting script with processor class
      
      * Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * remove gumbel class
      
      * remove tests related to previously deleted class
      
      * Update src/transformers/models/wav2vec2_bert/configuration_wav2vec2_bert.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * correct typos
      
* remove unused parameters
      
      * update processor to takes both text and audio
      
      * update checkpoints
      
      * update expected output and add ctc expected output
      
      * add label_attention_mask
      
      * replace pt with np in processor tests
      
      * fix typo
      
      * revert to behaviour with labels_attention_mask
      
      ---------
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      d2cdefb9
  21. 17 Jan, 2024 1 commit
    • Junyang Lin's avatar
      Add qwen2 (#28436) · d6ffe74d
      Junyang Lin authored
      
      
      * add config, modeling, and tokenization
      
      * add auto and init
      
      * update readme
      
      * update readme
      
      * update team name
      
      * fixup
      
      * fixup
      
      * update config
      
      * update code style
      
      * update for fixup
      
      * update for fixup
      
      * update for fixup
      
      * update for testing
      
      * update for testing
      
      * fix bug for config and tokenization
      
      * fix bug for bos token
      
      * not doctest
      
      * debug tokenizer
      
      * not doctest
      
      * debug tokenization
      
      * debug init for tokenizer
      
      * fix style
      
      * update init
      
      * delete if in token auto
      
      * add tokenizer doc
      
      * add tokenizer in init
      
      * Update dummy_tokenizers_objects.py
      
      * update
      
      * update
      
      * debug
      
      * Update tokenization_qwen2.py
      
      * debug
      
      * Update convert_slow_tokenizer.py
      
      * add copies
      
      * add copied from and make style
      
      * update files map
      
      * update test
      
      * fix style
      
      * fix merge reading and update tests
      
      * fix tests
      
      * fix tests
      
      * fix style
      
      * debug a variable in readme
      
      * Update src/transformers/models/qwen2/configuration_qwen2.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * update test and copied from
      
      * fix style
      
* update qwen2 tokenization and tests
      
      * Update tokenization_qwen2.py
      
      * delete the copied from after property
      
      * fix style
      
      * update tests
      
      * update tests
      
      * add copied from
      
      * fix bugs
      
      * update doc
      
      * add warning for sliding window attention
      
      * update qwen2 tokenization
      
      * fix style
      
      * Update src/transformers/models/qwen2/modeling_qwen2.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix tokenizer fast
      
      ---------
Co-authored-by: Ren Xuancheng <jklj077@users.noreply.github.com>
Co-authored-by: renxuancheng.rxc <renxuancheng.rxc@alibaba-inc.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      d6ffe74d
  22. 11 Jan, 2024 1 commit
  23. 10 Jan, 2024 1 commit
  24. 08 Jan, 2024 1 commit
    • NielsRogge's avatar
      Add SigLIP (#26522) · 3b742ea8
      NielsRogge authored
      
      
      * Add first draft
      
      * Use appropriate gelu function
      
      * More improvements
      
      * More improvements
      
      * More improvements
      
      * Convert checkpoint
      
      * More improvements
      
      * Improve docs, remove print statements
      
      * More improvements
      
      * Add link
      
      * remove unused masking function
      
      * begin tokenizer
      
      * do_lower_case
      
      * debug
      
      * set split_special_tokens=True
      
      * Remove script
      
      * Fix style
      
      * Fix rebase
      
      * Use same design as CLIP
      
      * Add fast tokenizer
      
      * Add SiglipTokenizer to init, remove extra_ids
      
      * Improve conversion script
      
      * Use smaller inputs in conversion script
      
      * Update conversion script
      
      * More improvements
      
      * Add processor to conversion script
      
      * Add tests
      
      * Remove print statements
      
      * Add tokenizer tests
      
      * Fix more tests
      
      * More improvements related to weight initialization
      
      * More improvements
      
      * Make more tests pass
      
      * More improvements
      
      * More improvements
      
      * Add copied from
      
      * Add canonicalize_text
      
      * Enable fast tokenizer tests
      
      * More improvements
      
      * Fix most slow tokenizer tests
      
      * Address comments
      
      * Fix style
      
      * Remove script
      
      * Address some comments
      
      * Add copied from to tests
      
      * Add more copied from
      
      * Add more copied from
      
      * Add more copied from
      
      * Remove is_flax_available
      
      * More updates
      
      * Address comment
      
      * Remove SiglipTokenizerFast for now
      
      * Add caching
      
      * Remove umt5 test
      
      * Add canonicalize_text inside _tokenize, thanks Arthur
      
      * Fix image processor tests
      
      * Skip tests which are not applicable
      
      * Skip test_initialization
      
      * More improvements
      
      * Compare pixel values
      
      * Fix doc tests, add integration test
      
      * Add do_normalize
      
      * Remove causal mask and leverage ignore copy
      
      * Fix attention_mask
      
      * Fix remaining tests
      
      * Fix dummies
      
      * Rename temperature and bias
      
      * Address comments
      
      * Add copied from to tokenizer tests
      
      * Add SiglipVisionModel to auto mapping
      
      * Add copied from to image processor tests
      
      * Improve doc
      
      * Remove SiglipVisionModel from index
      
      * Address comments
      
      * Improve docs
      
      * Simplify config
      
      * Add first draft
      
      * Make it like mistral
      
      * More improvements
      
      * Fix attention_mask
      
      * Fix output_attentions
      
      * Add note in docs
      
      * Convert multilingual model
      
      * Convert large checkpoint
      
      * Convert more checkpoints
      
      * Add pipeline support, correct image_mean and image_std
      
      * Use padding=max_length by default
      
      * Make processor like llava
      
      * Add code snippet
      
      * Convert more checkpoints
      
      * Set keep_punctuation_string=None as in OpenCLIP
      
      * Set normalized=False for special tokens
      
      * Fix doc test
      
      * Update integration test
      
      * Add figure
      
      * Update organization
      
      * Happy new year
      
      * Use AutoModel everywhere
      
      ---------
Co-authored-by: patil-suraj <surajp815@gmail.com>
      3b742ea8
  25. 04 Jan, 2024 1 commit
  26. 03 Jan, 2024 1 commit
    • Connor Henderson's avatar
      Add FastSpeech2Conformer (#23439) · d83ff5ee
      Connor Henderson authored
      * start - docs, SpeechT5 copy and rename
      
      * add relevant code from FastSpeech2 draft, have tests pass
      
      * make it an actual conformer, demo ex.
      
      * matching inference with original repo, includes debug code
      
      * refactor nn.Sequentials, start more desc. var names
      
      * more renaming
      
      * more renaming
      
      * vocoder scratchwork
      
      * matching vocoder outputs
      
      * hifigan vocoder conversion script
      
      * convert model script, rename some config vars
      
      * replace postnet with speecht5's implementation
      
      * passing common tests, file cleanup
      
      * expand testing, add output hidden states and attention
      
      * tokenizer + passing tokenizer tests
      
      * variety of updates and tests
      
      * g2p_en pckg setup
      
      * import structure edits
      
      * docstrings and cleanup
      
      * repo consistency
      
      * deps
      
      * small cleanup
      
      * forward signature param order
      
      * address comments except for masks and labels
      
      * address comments on attention_mask and labels
      
      * address second round of comments
      
      * remove old unneeded line
      
      * address comments part 1
      
      * address comments pt 2
      
      * rename auto mapping
      
      * fixes for failing tests
      
      * address comments part 3 (bart-like, train loss)
      
      * make style
      
      * pass config where possible
      
      * add forward method + tests to WithHifiGan model
      
      * make style
      
      * address arg passing and generate_speech comments
      
      * address Arthur comments
      
      * address Arthur comments pt2
      
      * lint  changes
      
      * Sanchit comment
      
      * add g2p-en to doctest deps
      
      * move up self.encoder
      
      * onnx compatible tensor method
      
      * fix is symbolic
      
      * fix paper url
      
      * move models to espnet org
      
      * make style
      
      * make fix-copies
      
      * update docstring
      
      * Arthur comments
      
      * update docstring w/ new updates
      
      * add model architecture images
      
      * header size
      
      * md wording update
      
      * make style
      d83ff5ee
  27. 13 Dec, 2023 2 commits
    • Lysandre's avatar
      Dev version · 3ed3e319
      Lysandre authored
      3ed3e319
    • Younes Belkada's avatar
      Adds VIP-llava to transformers (#27932) · c7f076a0
      Younes Belkada authored
      * v1
      
      * add-new-model-like
      
      * revert
      
      * fix forward and conversion script
      
      * revert
      
      * fix copies
      
      * fixup
      
      * fix
      
      * Update docs/source/en/index.md
      
      * Apply suggestions from code review
      
      * push
      
      * fix
      
      * fixes here and there
      
      * up
      
      * fixup and fix tests
      
      * Apply suggestions from code review
      
      * add docs
      
      * fixup
      
      * fixes
      
      * docstring
      
      * add docstring
      
      * fixup
      
      * docstring
      
      * fixup
      
      * nit
      
      * docs
      
      * more copies
      
      * fix copies
      
      * nit
      
      * update test
      c7f076a0
  28. 11 Dec, 2023 2 commits
    • Arthur's avatar
      [`Add Mixtral`] Adds support for the Mixtral MoE (#27942) · accccdd0
      Arthur authored
      
      
      * up
      
      * up
      
      * test
      
      * logits ok
      
      * up
      
      * up
      
      * few fixes
      
      * conversion script
      
      * up
      
      * nits
      
      * nits
      
      * update
      
      * nuke
      
      * more updates
      
* nits
      
      * fix many issues
      
      * nit
      
      * scatter
      
      * nit
      
      * nuke megablocks
      
      * nits
      
      * fix conversion script
      
      * nit
      
      * remove
      
      * nits
      
      * nit
      
      * update
      
      * oupsssss
      
      * change
      
      * nits device
      
      * nits
      
      * fixup
      
      * update
      
      * merge
      
      * add copied from
      
      * fix the copy mentions
      
      * update tests
      
      * more fixes
      
      * nits
      
      * conversion script
      
      * add parts of the readme
      
      * Update tests/models/mixtral/test_modeling_mixtral.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * new test + conversion script
      
      * Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Apply suggestions from code review
      
      * fix
      
      * fix copies
      
      * fix copies
      
      * ooops
      
      * fix config
      
      * Apply suggestions from code review
      
      * fix nits
      
      * nit
      
      * add copies
      
      * add batched tests
      
      * docs
      
      * fix flash attention
      
      * let's add more verbose
      
      * add correct outputs
      
* support router outputs
      
      * ignore copies where needed
      
      * fix
      
      * cat list if list is given for now
      
      * nits
      
      * Update docs/source/en/model_doc/mixtral.md
      
      * finish router refactoring
      
      * fix forward
      
      * fix expected values
      
      * nits
      
      * fixup
      
      * fix
      
      * fix bug
      
      * fix
      
      * fix dtype mismatch
      
      * fix
      
      * grrr grrr I support item assignment
      
      * fix CI
      
      * docs
      
      * fixup
      
      * remove some copied form
      
      * fix weird diff
      
      * skip doctest fast on the config and modeling
      
      * mark that is supports flash attention in the doc
      
      * update
      
      * Update src/transformers/models/mixtral/modeling_mixtral.py
Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * Update docs/source/en/model_doc/mixtral.md
Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * revert router logits config issue
      
      * update doc accordingly
      
      * Update src/transformers/models/mixtral/convert_mixtral_weights_to_hf.py
      
      * nits
      
* use torch testing assert close
      
      * fixup
      
      * doc nits
      
      ---------
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Lysandre Debut <hi@lysand.re>
      accccdd0
    • NielsRogge's avatar
      [LLaVa] Some improvements (#27895) · 7ea21f1f
      NielsRogge authored
      * More improvements
      
      * Improve variable names
      
      * Update READMEs, improve docs
      7ea21f1f
  29. 07 Dec, 2023 1 commit
    • Younes Belkada's avatar
[`Llava`] Add Llava to transformers (#27662) · 44b5506d
      Younes Belkada authored
      * add model like
      
      * logits match
      
      * minor fixes
      
      * fixes
      
      * up
      
      * up
      
      * add todo
      
      * llava processor
      
      * keep the processor simple
      
      * add conversion script
      
      * fixup
      
      * fix copies
      
      * up
      
      * add to index
      
      * fix config + logits
      
      * fix
      
      * refactor
      
      * more refactor
      
      * more refactor
      
      * fix copies
      
      * add authors
      
      * v1 tests
      
      * add `LlavaProcessor` in init
      
      * remove unneeded import
      
      * up
      
      * up
      
      * docs
      
      * up
      
      * fix CI
      
      * fix CI
      
      * add attention mask in test
      
      * make fixup
      
      * remove the vision model
      
      * that's the dirty way to do it
      
      * nits
      
      * nits
      
      * updates
      
      * add more tests
      
      * add input tests
      
      * fixup
      
      * more styling
      
      * nits
      
      * updates and cleanup
      
      * fixup the generation expected results
      
      * fix the testing script
      
      * some cleanup and simplification which does not work yet but almost there!
      
      * make correct dispatch operations
      
      * vectorize works for batch of images and text
      
      * last todos
      
      * nits
      
      * update test and modeling code
      
      * remove useless function for now
      
      * fix few issues
      
      * fix generation
      
      * some nits
      
      * add bakllava
      
      * nits
      
      * remove duplicated code
      
      * finish merge
      
      * cleanup
      
      * missed this line
      
      * fill the todos
      
      * add left padding offset
      
      * add left and right padding logic
      
      * bool to properly index
      
      * make sure
      
      * more cleanups
      
      * batch is fixed 😉
      
      * add correct device for tensor creation
      
      * fix some dtype mismatch
      
      * ruff
      
      * update conversion script
      
      * Update src/transformers/__init__.py
      
      * fa 2 support + fix conversion script
      
      * more
      
      * correct reshaping
      
      * fix test dict
      
      * fix copies by ignoring
      
      * fix nit
      
      * skip clip vision model
      
      * fixup
      
      * fixup
      
      * LlavaForVisionText2Text -> LlavaForCausalLM
      
      * update
      
      * fix
      
      * raise correct errors
      
      * fix
      
      * docs
      
      * nuke for now
      
      * nits here and there
      
      * fixup
      
      * fix remaining tests
      
      * update LlavaForConditionalGeneration instead of CausalLM
      
      * fixups
      
      * pipeline support
      
      * slow and pipeline tests
      
      * supports batch
      
      * nits
      
      * cleanup
      
      * fix first integration tests
      
      * add pad token where needed
      
      * correct tests
      
      * fixups
      
      * update pipeline tests
      
      * fix quality
      
      * nits
      
      * revert unneeded change
      
      * nit
      
      * use BatchFeature
      
      * from ...feature_extraction_utils import BatchFeature
      
      * nits
      
      * nits
      
      * properly update
      
      * more f*** nits
      
      * fix copies
      
      * comment
      
      * keep slow test slow
      
      * Update src/transformers/models/llava/processing_llava.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * add pipeline example
      
      * add pixel values in docstring
      
      * update pr doctest
      
      * fix
      
      * fix slow tests
      
      * remove hack
      
      * fixup
      
      * small note
      
      * forward contrib credits from PR25789
      
      * forward contrib credits from original implementation and work
      
      * add arthur
      
      * Update src/transformers/models/llava/processing_llava.py
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * update docstring
      
      * nit
      
      * move to not doctested because of timeout issues
      
      * fixup
      
      * add description
      
      * more
      
      * fix-copies
      
      * fix docs
      
      * add beam search
      
      * add more comments
      
      * add typehints on processor
      
      * add speedup plot
      
      * update slow tests and docs
      
      * push test
      
      * push batched test
      
      * fix batched generation with different number of images
      
      * remove benchmark due to a bug
      
      * fix test
      
      * fix copies
      
      * add gcolab demo
      
      ---------
      Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: shauray8 <shauray8@users.noreply.github.com>
      Co-authored-by: haotian-liu <haotian-liu@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      44b5506d
  30. 05 Dec, 2023 1 commit
    • Arindam Jati's avatar
      [Time series] Add PatchTSMixer (#26247) · b242d0f2
      Arindam Jati authored
      
      
      * patchtsmixer initial commit
      
      * x,y->context_values,target_values, unittest added
      
      * cleanup code
      
      * minor
      
      * return hidden states
      
      * model tests, partial integration tests
      
      * ettm notebook temporary
      
      * minor
      
      * config mask bug fix, tests updated
      
      * final ETT notebooks
      
      * add selfattn
      
      * init
      
      * added docstrings
      
      * PatchTSMixerForPretraining -> PatchTSMixerForMaskPretraining
      
      * functionality tests added
      
      * add start and input docstrings
      
      * docstring edits
      
      * testcase edits
      
      * minor changes
      
      * docstring error fixed
      
      * ran make fixup
      
      * finalize integration tests and docs
      
      * minor
      
      * cleaned gitignore
      
      * added dataclass decorator, ran black formatter
      
      * ran ruff
      
      * formatting
      
      * add slow decorator
      
      * renamed in_Channel to input_size and default to 1
      
      * shorten dataclass names
      
      * use smaller model for testing
      
      * moved the 3 heads to the modeling file
      
      * use scalers instead of revin
      
      * support forecast_channel_indices
      
      * fix regression scaling
      
      * undo reg. scaling
      
      * removed unneeded classes
      
      * forgot missing
      
      * add more layers
      
      * add copied positional_encoding
      
      * use patchmask from patchtst
      
      * removed dependency on layers directory
      
      * formatting
      
      * set seed
      
      * removed unused imports
      
      * fixed forward signature test
      
      * adding distributional head for PatchTSMixerForecasting
      
      * add generate to forecast
      
      * testcases for generate
      
      * add generate and distributional head for regression
      
      * raise Exception for negative values for negative binomial distribution
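
The rationale for the negative-value check above: the negative binomial distribution is supported only on non-negative counts, so negative targets cannot be modeled by that head. A minimal sketch of such a guard (the helper name is hypothetical, not the PR's actual code):

```python
def check_negative_binomial_targets(target_values):
    # Hypothetical guard: negative binomial support is the non-negative
    # integers, so any negative target value is invalid for this head.
    if any(v < 0 for v in target_values):
        raise ValueError(
            "Target values must be non-negative when using the "
            "negative binomial distribution head."
        )
    return target_values
```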
      
      * formatting changes
      
      * remove copied from patchtst and add TODO for test passing
      
      * make copies
      
      * doc edits
      
      * minor changes
      
      * format issues
      
      * minor changes
      
      * minor changes
      
      * format docstring
      
      * change some class names to PatchTSMixer + class name
      
      Transpose to PatchTSMixerTranspose
      GatedAttention to PatchTSMixerGatedAttention
      
      * change NormLayer to PatchTSMixerNormLayer
      
      * change MLP to PatchTSMixerMLP
      
      * change PatchMixer to PatchMixerBlock, FeatureMixer to FeatureMixerBlock
      
      * change ChannelFeatureMixer to ChannelFeatureMixerBlock
      
      * change PatchMasking to PatchTSMixerMasking
      
      * change Patchify to PatchTSMixerPatchify
      
      * list to `list`
      
      * fix docstrings
      
      * formatting
      
      * change bs to batch_size, edit forecast_masking
      
      * edit random_masking
      
      * change variable name and update docstring in PatchTSMixerMasking
      
      * change variable name and update docstring in InjectScalerStatistics4D
      
      * update forward call in PatchTSMixerTranspose
      
      * change variable name and update docstring in PatchTSMixerNormLayer
      
      * change variable name and update docstring in PatchTSMixerMLP
      
      * change variable name and update docstring in ChannelFeatureMixerBlock
      
      * formatting
      
      * formatting issues
      
      * docstring issue
      
      * fixed observed_mask type in docstrings
      
      * use FloatTensor type
      
      * formatting
      
      * fix rescaling issue in forecasting, fixed integration tests
      
      * add docstring from decorator
      
      * fix docstring
      
      * Update README.md
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/patchtsmixer/configuration_patchtsmixer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/patchtsmixer/modeling_patchtsmixer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/patchtsmixer/configuration_patchtsmixer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/patchtsmixer/modeling_patchtsmixer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * PatchTSMixerChannelFeatureMixerBlock
      
      * formatting
      
      * ForPretraining
      
      * use num_labels instead of n_classes
      
      * remove commented out code
      
      * docstring fixed
      
      * nn.functional used instead of one letter F
      
      * x_tmp renamed
      
      * one letter variable x removed from forward calls
      
      * one letter variable y removed
      
      * remove commented code
      
      * rename patch_size, in_channels, PatchTSMixerBackbone
      
      * add config to heads
      
      * add config to heads tests
      
      * code refactoring to use config instead of passing individual params
      
      * docstring fixes part 1
      
      * docstring fixes part 2
      
      * removed logger.debug
      
      * context_values -> past_values
      
      * formatting changes
      
      * pe -> positional_encoding
      
      * removed unused target variable
      
      * self.mode logic fixed
      
      * formatting change
      
      * edit docstring and var name
      
      * change n_targets to num_targets
      
      * rename input_size to num_input_channels
      
      * add head names with prefix PatchTSMixer
      
      * edit docstring in PatchTSMixerForRegression
      
      * fix var name change in testcases
      
      * add PatchTSMixerAttention
      
      * return dict for all exposed classes, test cases added
      
      * format
      
      * move loss function to forward call
      
      * make style
      
      * adding return dict/tuple
      
      * make repo-consistency
      
      * remove flatten mode
      
      * code refactoring
      
      * rename data
      
      * remove PatchTSMixer and keep only PatchTSMixerEncoder
      
      * docstring fixes
      
      * removed unused code
      
      * format
      
      * format
      
      * remove contiguous and formatting changes
      
      * remove model description from config
      
      * replace asserts with ValueError
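
The "replace asserts with ValueError" step follows a common library convention: `assert` statements vanish under `python -O` and raise an uninformative `AssertionError`, while explicit checks always run and tell the caller what to fix. A minimal sketch of the pattern (the parameter name is illustrative, not the PR's exact code):

```python
def validate_num_input_channels(num_input_channels: int) -> int:
    # Before: assert num_input_channels > 0
    # After: an explicit check that survives `python -O` and gives
    # the caller an actionable error message.
    if num_input_channels <= 0:
        raise ValueError(
            f"`num_input_channels` must be a positive integer, got {num_input_channels}."
        )
    return num_input_channels
```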
      
      * remove nn.Sequential from PatchTSMixerNormLayer
      
      * replace if-else with map
      
      * remove all nn.Sequential
      
      * format
      
      * formatting
      
      * fix gradient_checkpointing error after merge, and formatting
      
      * make fix-copies
      
      * remove comments
      
      * reshape
      
      * doesn't support gradient checkpointing
      
      * correct Patchify
      
      * masking updates
      
      * batchnorm copy from
      
      * format checks
      
      * scaler edits
      
      * remove comments
      
      * format changes
      
      * remove self.config
      
      * correct class PatchTSMixerMLP(nn.Module):
      
      * make fix
      
      * doc updates
      
      * fix-copies
      
      * scaler class correction
      
      * doc edits
      
      * scaler edits
      
      * update readme with links
      
      * injectstatistics add
      
      * fix-copies
      
      * add norm_eps option to LayerNorm
      
      * format changes
      
      * fix copies
      
      * correct make copies
      
      * use parametrize
      
      * fix doc string
      
      * add docs to toctree
      
      * make style
      
      * doc segmenting
      
      * docstring edit
      
      * change forecast to prediction
      
      * edit doc
      
      * doc edits
      
      * remove PatchTSMixerTranspose
      
      * add PatchTSMixerPositionalEncoding and init position_enc
      
      * remove positional_encoding
      
      * edit forecast_masking, remove forecast_mask_ratios
      
      * fix broken code
      
      * var rename target_values -> future_values
      
      * num_features -> d_model
      
      * fix broken code after master merge
      
      * repo consistency
      
      * use positional embedding
      
      * prediction_logits -> prediction_outputs, make fix-copies
      
      * uncommented @slow
      
      * minor changes
      
      * loss first in tuple
      
      * tuple and dict same ordering
      
      * style edits
      
      * minor changes
      
      * dict/tuple consistent enablement
      
      * Update src/transformers/models/patchtsmixer/modeling_patchtsmixer.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update tests/models/patchtsmixer/test_modeling_patchtsmixer.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/patchtsmixer/modeling_patchtsmixer.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix formatting
      
      * formatting
      
      * usage tip
      
      * test on cpu only
      
      * add sample usage
      
      * change PatchTSMixerForClassification to PatchTSMixerForTimeSeriesClassification
      
      * push changes
      
      * fix copies
      
      * std scaling set to default True case
      
      * minor changes
      
      * style changes
      
      ---------
      Co-authored-by: Arindam Jati <arindam.jati@ibm.com>
      Co-authored-by: vijaye12 <vijaye12@in.ibm.com>
      Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
      Co-authored-by: nnguyen <nnguyen@us.ibm.com>
      Co-authored-by: vijaye12 <vijaykr.e@gmail.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: Nam Nguyen <namctin@gmail.com>
      Co-authored-by: Wesley Gifford <79663411+wgifford@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      b242d0f2
  31. 01 Dec, 2023 1 commit
  32. 30 Nov, 2023 1 commit
    • Yoach Lacombe's avatar
      Add SeamlessM4T v2 (#27779) · 29f1aee3
      Yoach Lacombe authored
      
      
      * add working conversion script
      
      * first non-working version of modeling code
      
      * update modeling code (working)
      
      * make style
      
      * make fix-copies
      
      * add config docstrings
      
      * add config to ignore docstring formatting due to unconventional markdown
      
      * fix copies
      
      * fix generation num_return_sequences
      
      * enrich docs
      
      * add and fix tests beside integration tests
      
      * update integration tests
      
      * update repo id
      
      * add tie weights and make style
      
      * correct naming in .md
      
      * fix imports and so on
      
      * correct docstrings
      
      * fix fp16 speech forward
      
      * fix speechencoder attention
      
      * make style
      
      * fix copied from
      
      * rename SeamlessM4T-v2 to SeamlessM4Tv2
      
      * Apply suggestions on configuration
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * remove useless public models
      
      * fix private models + better naming for T2U models
      
      * clean speech encoder relative position embeddings
      
      * refactor chunk attention
      
      * add docstrings to chunk attention method
      
      * improve naming and docstrings
      
      * rename some attention variables + add temperature sampling in T2U model
      
      * rename DOCSTRINGS variable names
      
      * make style + remove 2 useless config parameters
      
      * enrich model card
      
      * remove any attention_head reference + fix temperature in T2U
      
      * new fmt and make style
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * rename spkr_id->speaker_id and change docstrings of get_char_input_ids
      
      * simplify v2attention
      
      * make style
      
      * Update seamless_m4t_v2.md
      
      * update code and tests with last update
      
      * update repo ids
      
      * fill article name, abstract and authors
      
      * update not_doctested and slow_doc tests
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      29f1aee3
  33. 29 Nov, 2023 1 commit
    • Kashif Rasul's avatar
      [Time series] Add patchtst (#27581) · af8acc47
      Kashif Rasul authored
      
      
      * add distribution head to forecasting
      
      * formatting
      
      * Add generate function for forecasting
      
      * Add generate function to prediction task
      
      * formatting
      
      * use argsort
      
      * add past_observed_mask ordering
      
      * fix arguments
      
      * docs
      
      * add back test_model_outputs_equivalence test
      
      * formatting
      
      * cleanup
      
      * formatting
      
      * use ACT2CLS
      
      * formatting
      
      * fix add_start_docstrings decorator
      
      * add distribution head and generate function to regression task
      
      add distribution head and generate function to regression task. Also added PatchTSTForForecastingOutput, PatchTSTForRegressionOutput.
      
      * fix typos
      
      * add forecast_masking
      
      * fixed tests
      
      * use set_seed
      
      * fix doc test
      
      * formatting
      
      * Update docs/source/en/model_doc/patchtst.md
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * better var names
      
      * rename PatchTSTTranspose
      
      * fix argument names and docs string
      
      * remove compute_num_patches and unused class
      
      * remove assert
      
      * renamed to PatchTSTMasking
      
      * use num_labels for classification
      
      * use num_labels
      
      * use default num_labels from super class
      
      * move model_type after docstring
      
      * renamed PatchTSTForMaskPretraining
      
      * bs -> batch_size
      
      * more review fixes
      
      * use hidden_state
      
      * rename encoder layer and block class
      
      * remove commented seed_number
      
      * edit docstring
      
      * Add docstring
      
      * formatting
      
      * use past_observed_mask
      
      * doc suggestion
      
      * make fix-copies
      
      * use Args:
      
      * add docstring
      
      * add docstring
      
      * change some variable names and add PatchTST before some class names
      
      * formatting
      
      * fix argument types
      
      * fix tests
      
      * change x variable to patch_input
      
      * format
      
      * formatting
      
      * fix-copies
      
      * Update tests/models/patchtst/test_modeling_patchtst.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * move loss to forward
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * formatting
      
      * fix a bug when pre_norm is set to True
      
      * output_hidden_states is set to False as default
      
      * set pre_norm=True as default
      
      * format docstring
      
      * format
      
      * output_hidden_states is None by default
      
      * add missing docs
      
      * better var names
      
      * docstring: remove default to False in output_hidden_states
      
      * change labels name to target_values in regression task
      
      * format
      
      * fix tests
      
      * change to forecast_mask_ratios and random_mask_ratio
      
      * change mask names
      
      * change future_values to target_values param in the prediction class
      
      * remove nn.Sequential and make PatchTSTBatchNorm class
      
      * black
      
      * fix argument name for prediction
      
      * add output_attentions option
      
      * add output_attentions to PatchTSTEncoder
      
      * formatting
      
      * Add attention output option to all classes
      
      * Remove PatchTSTEncoderBlock
      
      * create PatchTSTEmbedding class
      
      * use config in PatchTSTPatchify
      
      * Use config in PatchTSTMasking class
      
      * add channel_attn_weights
      
      * Add PatchTSTScaler class
      
      * add output_attentions arg to test function
      
      * format
      
      * Update doc with image patchtst.md
      
      * fix-copies
      
      * rename Forecast <-> Prediction
      
      * change name of a few parameters to match with PatchTSMixer.
      
      * Remove *ForForecasting class to match with other time series models.
      
      * make style
      
      * Remove PatchTSTForForecasting in the test
      
      * remove PatchTSTForForecastingOutput class
      
      * change test_forecast_head to test_prediction_head
      
      * style
      
      * fix docs
      
      * fix tests
      
      * change num_labels to num_targets
      
      * Remove PatchTSTTranspose
      
      * remove arguments in PatchTSTMeanScaler
      
      * remove arguments in PatchTSTStdScaler
      
      * add config as an argument to all the scaler classes
      
      * reformat
      
      * Add norm_eps for batchnorm and layernorm
      
      * reformat.
      
      * reformat
      
      * edit docstring
      
      * update docstring
      
      * change variable name pooling to pooling_type
      
      * fix output_hidden_states as tuple
      
      * fix bug when calling PatchTSTBatchNorm
      
      * change stride to patch_stride
      
      * create PatchTSTPositionalEncoding class and restructure the PatchTSTEncoder
      
      * formatting
      
      * initialize scalers with configs
      
      * edit output_hidden_states
      
      * style
      
      * fix forecast_mask_patches doc string
      
      * doc improvements
      
      * move summary to the start
      
      * typo
      
      * fix docstring
      
      * turn off masking when using prediction, regression, classification
      
      * return scaled output
      
      * adjust output when using distribution head
      
      * remove _num_patches function in the config
      
      * get config.num_patches from patchifier init
      
      * add output_attentions docstring, remove tuple in output_hidden_states
      
      * change SamplePatchTSTPredictionOutput and SamplePatchTSTRegressionOutput to SamplePatchTSTOutput
      
      * remove print("model_class: ", model_class)
      
      * change encoder_attention_heads to num_attention_heads
      
      * change norm to norm_layer
      
      * change encoder_layers to num_hidden_layers
      
      * change shared_embedding to share_embedding, shared_projection to share_projection
      
      * add output_attentions
      
      * more robust check of norm_type
      
      * change dropout_path to path_dropout
      
      * edit docstring
      
      * remove positional_encoding function and add _init_pe in PatchTSTPositionalEncoding
      
      * edit shape of cls_token and initialize it
      
      * add a check on the num_input_channels.
      
      * edit head_dim in the Prediction class to allow the use of cls_token
      
      * remove some positional_encoding_type options, remove learn_pe arg, initialize pe
      
      * change Exception to ValueError
      
      * format
      
      * norm_type is "batchnorm"
      
      * make style
      
      * change cls_token shape
      
      * Change forecast_mask_patches to num_mask_patches. Remove forecast_mask_ratios.
      
      * Bring PatchTSTClassificationHead on top of PatchTSTForClassification
      
      * change encoder_ffn_dim to ffn_dim and edit the docstring.
      
      * update variable names to match with the config
      
      * add generation tests
      
      * change num_mask_patches to num_forecast_mask_patches
      
      * Add examples explaining the use of these models
      
      * make style
      
      * Revert "Revert "[time series] Add PatchTST (#25927)" (#27486)"
      
      This reverts commit 78f6ed6c.
      
      * make style
      
      * fix default std scaler's minimum_scale
      
      * fix docstring
      
      * close code blocks
      
      * Update docs/source/en/model_doc/patchtst.md
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update tests/models/patchtst/test_modeling_patchtst.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/patchtst/configuration_patchtst.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/patchtst/modeling_patchtst.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * fix tests
      
      * add add_start_docstrings
      
      * move examples to the forward's docstrings
      
      * update prepare_batch
      
      * update test
      
      * fix test_prediction_head
      
      * fix generation test
      
      * use seed to create generator
      
      * add output_hidden_states and config.num_patches
      
      * add loc and scale args in PatchTSTForPredictionOutput
      
      * edit outputs if if not return_dict
      
      * use self.share_embedding to check instead of checking type.
      
      * remove seed
      
      * make style
      
      * seed is an optional int
      
      * fix test
      
      * generator device
      
      * Fix assertTrue test
      
      * swap order of items in outputs when return_dict=False.
      
      * add mask_type and random_mask_ratio to unittest
      
      * Update modeling_patchtst.py
      
      * add add_start_docstrings for regression model
      
      * make style
      
      * update model path
      
      * Edit the ValueError comment in forecast_masking
      
      * update examples
      
      * make style
      
      * fix commented code
      
      * update examples: remove config from from_pretrained call
      
      * Edit example outputs
      
      * Set default target_values to None
      
      * remove config setting in regression example
      
      * Update configuration_patchtst.py
      
      * Update configuration_patchtst.py
      
      * remove config from examples
      
      * change default d_model and ffn_dim
      
      * norm_eps default
      
      * set has_attentions to True and define self.seq_length = self.num_patches
      
      * update docstring
      
      * change variable mask_input to do_mask_input
      
      * fix blank space.
      
      * change logger.debug to logger.warning.
      
      * remove unused PATCHTST_INPUTS_DOCSTRING
      
      * remove all_generative_model_classes
      
      * set test_missing_keys=True
      
      * remove undefined params in the docstring.
      
      ---------
      Co-authored-by: nnguyen <nnguyen@us.ibm.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Nam Nguyen <namctin@gmail.com>
      Co-authored-by: Wesley Gifford <79663411+wgifford@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      af8acc47
  34. 28 Nov, 2023 1 commit
  35. 22 Nov, 2023 1 commit
    • Add UnivNet Vocoder Model for Tortoise TTS Diffusers Integration (#24799) · 7f6a804d
      dg845 authored
      * initial commit
      
      * Add initial testing files and modify __init__ files to add UnivNet imports.
      
      * Fix some bugs
      
      * Add checkpoint conversion script and add references to transformers pre-trained model.
      
      * Add UnivNet entries for auto.
      
      * Add initial docs for UnivNet.
      
      * Handle input and output shapes in UnivNetGan.forward and add initial docstrings.
      
      * Write tests and make them pass.
      
      * Write docs.
      
      * Add UnivNet doc to _toctree.yml and improve docs.
      
      * fix typo
      
      * make fixup
      
      * make fix-copies
      
      * Add upsample_rates parameter to config and improve config documentation.
      
      * make fixup
      
      * make fix-copies
      
      * Remove unused upsample_rates config parameter.
      
      * apply suggestions from review
      
      * make style
      
      * Verify and add reason for skipped tests inherited from ModelTesterMixin.
      
      * Add initial UnivNetGan integration tests
      
      * make style
      
      * Remove noise_length input to UnivNetGan and improve integration tests.
      
      * Fix bug and make style
      
      * Make UnivNet integration tests pass
      
      * Add initial code for UnivNetFeatureExtractor.
      
      * make style
      
      * Add initial tests for UnivNetFeatureExtractor.
      
      * make style
      
      * Properly initialize weights for UnivNetGan
      
      * Get feature extractor fast tests passing
      
      * make style
      
      * Get feature extractor integration tests passing
      
      * Get UnivNet integration tests passing
      
      * make style
      
      * Add UnivNetGan usage example
      
      * make style and use feature extractor from hub in integration tests
      
      * Update tips in docs
      
      * apply suggestions from review
      
      * make style
      
      * Calculate padding directly instead of using get_padding methods.
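      This commit inlines the padding computation instead of calling `get_padding` helper methods. A minimal sketch of the standard "same" padding formula for a stride-1 dilated 1D convolution (an assumed formula for illustration, not necessarily the exact UnivNet code):

      ```python
      # Hypothetical sketch: "same" padding for a stride-1 dilated 1D convolution,
      # computed inline rather than via a get_padding() helper.
      def same_padding(kernel_size: int, dilation: int = 1) -> int:
          # The effective receptive field spans dilation * (kernel_size - 1) + 1
          # samples; padding half the overhang on each side preserves the
          # output length.
          return (kernel_size - 1) * dilation // 2

      pad_k3 = same_padding(3)                # kernel 3, dilation 1 -> 1
      pad_k5_d2 = same_padding(5, dilation=2) # kernel 5, dilation 2 -> 4
      ```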
      
      * Update UnivNetFeatureExtractor.to_dict to be UnivNet-specific.
      
      * Update feature extractor to support using model(**inputs) and add the ability to generate noise and pad the end of the spectrogram in __call__.
      
      * Perform padding before generating noise to ensure the shapes are correct.
      
      * Rename UnivNetGan.forward's noise_waveform argument to noise_sequence.
      
      * make style
      
      * Add tests to test generating noise and padding the end for UnivNetFeatureExtractor.__call__.
      
      * Add tests for checking batched vs unbatched inputs for UnivNet feature extractor and model.
      
      * Add expected mean and stddev checks to the integration tests and make them pass.
      
      * make style
      
      * Make it possible to use model(**inputs), where inputs is the output of the feature extractor.
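      The `model(**inputs)` pattern works because the feature extractor returns a mapping whose keys match the model's forward-call keyword arguments. A stdlib-only sketch of the idea (all names and values here are illustrative, not the actual UnivNet API):

      ```python
      # Sketch of the model(**inputs) convention: the feature extractor's output
      # keys line up with the model's forward() parameter names, so the dict can
      # be unpacked straight into the call. Names are hypothetical.
      def feature_extractor(raw_audio):
          # Pretend preprocessing: package arrays under argument-name keys.
          return {
              "input_features": [x * 0.5 for x in raw_audio],
              "noise_sequence": [0.0] * len(raw_audio),
          }

      def model(input_features, noise_sequence):
          # Pretend forward pass: combine features and noise.
          return [f + n for f, n in zip(input_features, noise_sequence)]

      inputs = feature_extractor([2.0, 4.0])
      waveform = model(**inputs)  # keys unpack into forward's parameters
      ```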
      
      * fix typo in UnivNetGanConfig example
      
      * Calculate spectrogram_zero from other config values.
      
      * apply suggestions from review
      
      * make style
      
      * Refactor UnivNet conversion script to use load_state_dict (following persimmon).
      
      * Rename UnivNetFeatureExtractor to UnivNetGanFeatureExtractor.
      
      * make style
      
      * Switch to using torch.tensor and torch.testing.assert_close for testing expected values/slices.
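      `torch.testing.assert_close` compares tensors under relative and absolute tolerances rather than exact equality, which is more robust for expected-slice checks. A stdlib sketch of the same tolerance-based idea using `math.isclose` (the tolerance values are illustrative defaults, not a claim about the real test suite):

      ```python
      import math

      # Sketch of tolerance-based comparison in the spirit of
      # torch.testing.assert_close: each element must match its expected value
      # within a relative or absolute tolerance. Tolerances here are assumptions.
      def assert_close(actual, expected, rtol=1.3e-6, atol=1e-5):
          for a, e in zip(actual, expected):
              assert math.isclose(a, e, rel_tol=rtol, abs_tol=atol), (a, e)

      expected_slice = [0.1031, -0.0523, 0.0210]
      actual = [0.1031 + 5e-6, -0.0523, 0.0210 - 5e-6]
      assert_close(actual, expected_slice)  # passes: within atol
      ```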
      
      * make style
      
      * Use config in UnivNetGan modeling blocks.
      
      * make style
      
      * Rename the spectrogram argument of UnivNetGan.forward to input_features, following Whisper.
      
      * make style
      
      * Improving padding documentation.
      
      * Add UnivNet usage example to the docs.
      
      * apply suggestions from review
      
      * Move dynamic_range_compression computation into the mel_spectrogram method of the feature extractor.
      
      * Improve UnivNetGan.forward return docstring.
      
      * Update table in docs/source/en/index.md.
      
      * make fix-copies
      
      * Rename UnivNet components to have pattern UnivNet*.
      
      * make style
      
      * make fix-copies
      
      * Update docs
      
      * make style
      
      * Increase tolerance on flaky unbatched integration test.
      
      * Remove torch.no_grad decorators from UnivNet integration tests to try to avoid Flax/TensorFlow test errors.

      
      * Add padding_mask argument to UnivNetModel.forward and add batch_decode feature extractor method to remove padding.
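      `batch_decode` uses the padding mask to strip padded samples from each generated waveform. A stdlib sketch of the trimming logic (function and argument names are illustrative, not the actual feature-extractor API):

      ```python
      # Hypothetical sketch of batch_decode-style unpadding: keep only the
      # samples whose padding-mask entry is 1 (real data), dropping padding.
      def batch_decode(waveforms, padding_masks):
          return [
              [sample for sample, keep in zip(waveform, mask) if keep]
              for waveform, mask in zip(waveforms, padding_masks)
          ]

      trimmed = batch_decode(
          [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
          [[1, 1, 0], [1, 0, 0]],  # 1 = real sample, 0 = padding
      )
      ```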
      
      * Update documentation and clean up padding code.
      
      * make style
      
      * make style
      
      * Remove torch dependency from UnivNetFeatureExtractor.
      
      * make style
      
      * Fix UnivNetModel usage example
      
      * Clean up feature extractor code/docstrings.
      
      * apply suggestions from review
      
      * make style
      
      * Add comments for tests skipped via ModelTesterMixin flags.
      
      * Add comment for model parallel tests skipped via the test_model_parallel ModelTesterMixin flag.
      
      * Add # Copied from statements to copied UnivNetFeatureExtractionTest tests.
      
      * Simplify UnivNetFeatureExtractorTest.test_batch_decode.
      
      * Add support for unbatched padding_masks in UnivNetModel.forward.
      
      * Refactor unbatched padding_mask support.
      
      * make style
      7f6a804d
  36. 21 Nov, 2023 1 commit
    • TVP model (#25856) · c770600f
      jiqing-feng authored
      * tvp model for video grounding
      
      add tokenizer auto
      
      fix param in TVPProcessor
      
      add docs
      
      clear comments and enable different torch dtype
      
      add image processor test and model test and fix code style
      
      * fix conflict
      
      * fix model doc
      
      * fix image processing tests
      
      * fix tvp tests
      
      * remove torch in processor
      
      * fix grammar error
      
      * add more details on tvp.md
      
      * fix model arch for loss, grammar, and processor
      
      * add docstring and do not regard TvpTransformer, TvpVisionModel as individual model
      
      * use pad_image
      
      * update copyright
      
      * control first downsample stride
      
      * reduce_first only works for ResNetBottleNeckLayer
      
      * fix param name
      
      * fix style
      
      * add testing
      
      * fix style
      
      * rm init_weight
      
      * fix style
      
      * add post init
      
      * fix comments
      
      * do not test TvpTransformer
      
      * fix warning
      
      * fix style
      
      * fix example
      
      * fix config map
      
      * add link in config
      
      * fix comments
      
      * fix style
      
      * rm useless param
      
      * change attention
      
      * change test
      
      * add notes
      
      * fix comments
      
      * fix tvp
      
      * import checkpointing
      
      * fix gradient checkpointing
      
      * Use a more accurate example in readme
      
      * update
      
      * fix copy
      
      * fix style
      
      * update readme
      
      * delete print
      
      * remove tvp test_forward_signature
      
      * remove TvpTransformer
      
      * fix test init model
      
      * merge main and make style
      
      * fix tests and others
      
      * fix image processor
      
      * fix style and model_input_names
      
      * fix tests
      c770600f