1. 11 Jul, 2023 1 commit
    • Falcon port (#24523) · b3ab3fac
      Matt authored
      
      
      * Initial commit
      
      * Update src/transformers/models/falcon/configuration_falcon.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/falcon/configuration_falcon.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Cleanup config docstring
      
      * Update src/transformers/models/falcon/configuration_falcon.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Convert to relative imports
      
      * Remove torch < 1.8 warning
      
      * Restructure cos_sin header
      
      * qkv -> query, key, value
      
      * Refactor attention calculation
      
      * Add a couple of config variables to account for the different checkpoints
      
      * Successful merging of the code paths!
      
      * Fix misplaced line in the non-parallel attention path
      
      * Update config and tests
      
      * Add a pad_token_id when testing
      
      * Support output_attentions when alibi is None
      
      * make fixup
      
      * Skip KV cache shape test
      
      * No more _keys_to_ignore_on_load_missing
      
      * Simplify self attention a bit
      
      * Simplify self attention a bit
      
      * make fixup
      
      * stash commit
      
      * Some more attention mask updates
      
      * Should pass all tests except assisted generation!
      
      * Add big model generation test
      
      * make fixup
      
      * Add temporary workaround for test
      
      * Test overrides for assisted generation
      
      * Update src/transformers/models/falcon/modeling_falcon.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/falcon/modeling_falcon.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/falcon/modeling_falcon.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update tests/models/falcon/test_modeling_falcon.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Test overrides for assisted generation
      
      * Add generation demo
      
      * Update copyright
      
      * Make the docstring model actually small
      
      * Add module-level docstring
      
      * Remove all assertions
      
      * Add copied from bloom
      
      * Reformat the QKV layer
      
      * Add copied from bloom
      
      * Update src/transformers/models/falcon/modeling_falcon.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Remove unused line and reformat
      
      * No single letter variables
      
      * Cleanup return names
      
      * Add copied from line
      
      * Remove the deprecated arguments blocks
      
      * Change the embeddings test to an alibi on/off test
      
      * Remove position_ids from FalconForQA
      
      * Remove old check for token type IDs
      
      * Fix the alibi path when multi_query is False
      
      * Update src/transformers/models/falcon/modeling_falcon.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/falcon/modeling_falcon.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update tests/models/falcon/test_modeling_falcon.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update config naming
      
      * Fix typo for new_decoder_architecture
      
      * Add some comments
      
      * Fix docstring
      
      * Fix docstring
      
      * Create range in the right dtype from the start
      
      * Review comment cleanup
      
      * n_head_kv -> num_kv_heads
      
      * self.alibi -> self.use_alibi
      
      * self.num_kv -> self.num_kv_heads
      
      * Reorder config args
      
      * Made alibi arguments Optional
      
      * Add all model docstrings
      
      * Add extra checkpoints
      
      * Add author info for Falcon
      
      * Stop removing token_type_ids because our checkpoints shouldn't return it anymore
      
      * Add one hopeful comment for the future
      
      * Fix typo
      
      * Update tests, fix cache issue for generation
      
      * Use -1e9 instead of -inf to avoid float overflow
      
      * Recompute the rotary embeddings much less often
      
      * Re-enable disabled tests
      
      * One final fix to attention mask calculation, and update tests
      
      * Cleanup targeting falcon-40b equivalency
      
      * Post-rebase docs update
      
      * Update docstrings, especially in the config
      
      * More descriptive variable names, and comments where we can't rename them
      
      ---------
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
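      For orientation, a minimal generation sketch against the merged Falcon port. The checkpoint id and prompt are illustrative assumptions, not taken from the commit message.
      ```python
      from transformers import AutoModelForCausalLM, AutoTokenizer

      checkpoint = "tiiuae/falcon-7b"  # assumed Hub id, not named in the commit

      tokenizer = AutoTokenizer.from_pretrained(checkpoint)
      model = AutoModelForCausalLM.from_pretrained(checkpoint)

      inputs = tokenizer("The Falcon port means that", return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
      ```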
  2. 10 Jul, 2023 1 commit
  3. 07 Jul, 2023 1 commit
  4. 03 Jul, 2023 1 commit
    • [`Umt5`] Add google's umt5 to `transformers` (#24477) · 799df10a
      Arthur authored
      
      
      * add tokenization template
      
      * update conversion script
      
      * update modeling code
      
      * update
      
      * update convert checkpoint
      
      * update modeling
      
      * revert changes on convert script
      
      * new conversion script for new format
      
      * correct position bias
      
      * cleaning a bit
      
      * Credit co authors
      Co-authored-by: agemagician <ahmed.elnaggar@tum.de>
      
      Co-authored-by: stefan-it <>
      
      * styling
      
      * Add docs
      
      * fix copies
      
      * add co author
      
      * Other Author
      
      * Merge branch 'main' of https://github.com/huggingface/transformers into add-umt5
      
      * add testing
      
      * nit
      
      * Update docs/source/en/model_doc/umt5.mdx
      Co-authored-by: Stefan Schweter <stefan@schweter.it>
      
      * fix t5
      
      * actual fix?
      
      * revert wrong changes
      
      * remove
      
      * update test
      
      * more fixes
      
      * revert some changes
      
      * add SPIECE_UNDERLINE
      
      * add a common example
      
      * update
      
      * fix copies
      
      * revert changes on t5 conversion script
      
      * revert bytefallback changes since there was no addition yet
      
      * fixup
      
      * fixup
      
      * ignore umt5 custom testing folder
      
      * fix readmes
      
      * revert T5 changes
      
      * same outputs
      
      * fixup
      
      * update example
      
      * Apply suggestions from code review
      
      * style
      
      * draft addition of all new files
      
      * current update
      
      * fix attention and stuff
      
      * finish refactoring
      
      * auto config
      
      * fixup
      
      * more nits
      
      * add umt5 to init
      
      * use md format
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * revert changes on mt5
      
      * revert mt5 changes
      
      * update test
      
      * more fixes
      
      * add to mapping
      
      * fix-copies
      
      * fix copies
      
      * fix retain grad
      
      * fix some tests
      
      * nits
      
      * done
      
      * Update src/transformers/models/umt5/modeling_umt5.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/umt5.md
      
      * Update src/transformers/models/umt5/__init__.py
      
      * Update docs/source/en/model_doc/umt5.md
      Co-authored-by: Stefan Schweter <stefan@schweter.it>
      
      * Update src/transformers/models/umt5/modeling_umt5.py
      
      * update conversion script + use google checkpoints
      
      * nits
      
      * update test and modelling
      
      * stash slow convert
      
      * update fixup
      
      * don't change slow
      
      ---------
      
      Co-authored-by: stefan-it <>
      Co-authored-by: Stefan Schweter <stefan@schweter.it>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
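      A minimal usage sketch for the new UMT5 classes; the Google checkpoint id is an assumption based on the "use google checkpoints" step above, and the raw pretrained model will not produce polished text without fine-tuning.
      ```python
      from transformers import AutoTokenizer, UMT5ForConditionalGeneration

      checkpoint = "google/umt5-small"  # assumed Hub id for one of the converted Google checkpoints

      tokenizer = AutoTokenizer.from_pretrained(checkpoint)
      model = UMT5ForConditionalGeneration.from_pretrained(checkpoint)

      inputs = tokenizer("Translate to French: The weather is nice today.", return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
      ```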
  5. 29 Jun, 2023 1 commit
    • Add Musicgen (#24109) · 1c1c9075
      Sanchit Gandhi authored
      
      
      * Add Audiocraft
      
      * add cross attention
      
      * style
      
      * add for lm
      
      * convert and verify
      
      * introduce t5
      
      * split configs
      
      * load t5 + lm
      
      * clean conversion
      
      * copy from t5
      
      * style
      
      * start pattern provider
      
      * make generation work
      
      * style
      
      * fix pos embs
      
      * propagate shape changes
      
      * propagate shape changes
      
      * style
      
      * delay pattern: pad tokens at end
      
      * audiocraft -> musicgen
      
      * fix inits
      
      * add mdx
      
      * style
      
      * fix pad token in processor
      
      * override generate and add todos
      
      * add init to test
      
      * undo pattern delay mask after gen
      
      * remove cfg logits processor
      
      * remove cfg logits processor
      
      * remove logits processor in favour of mask
      
      * clean pos embs
      
      * make fix copies
      
      * update readmes
      
      * clean pos emb
      
      * refactor encoder/decoder
      
      * make fix copies
      
      * update conversion
      
      * fix config imports
      
      * update config docs
      
      * make style
      
      * send pattern mask to device
      
      * pattern mask with delay
      
      * recover prompted audio tokens
      
      * fix docstrings
      
      * laydown test file
      
      * pattern edge case
      
      * remove t5 ref
      
      * add processing class
      
      * config refactor
      
      * better pattern comment
      
      * check if mask is not present
      
      * check if mask is not present
      
      * refactor to auto class
      
      * remove encoder configs
      
      * fix processor
      
      * processor import
      
      * start updating conversion
      
      * start updating tests
      
      * make style
      
      * convert t5, encodec, lm
      
      * convert as composite
      
      * also convert processor
      
      * run generate
      
      * classifier free gen
      
      * comments and clean up
      
      * make style
      
      * docs for logit proc
      
      * docstring for uncond gen
      
      * start lm tests
      
      * work tests
      
      * let the lm generate
      
      * refactor: reshape inside forward
      
      * undo greedy loop changes
      
      * from_enc_dec -> from_sub_model
      
      * fix input id shapes in docstrings
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * undo generate changes
      
      * from sub model config
      
      * Update src/transformers/models/musicgen/modeling_musicgen.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * make generate work again
      
      * generate uncond -> get uncond inputs
      
      * remove prefix allowed tokens fn
      
      * better error message
      
      * logit proc checks
      
      * Apply suggestions from code review
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * make decoder only tests work
      
      * composite fast tests
      
      * make style
      
      * uncond generation
      
      * feat extr padding
      
      * make audio prompt work
      
      * fix inputs docstrings
      
      * unconditional inputs: dict -> model output
      
      * clean up tests
      
      * more clean up tests
      
      * make style
      
      * t5 encoder -> auto text encoder
      
      * remove comments
      
      * deal with frames
      
      * fix auto text
      
      * slow tests
      
      * nice mdx
      
      * remove can generate
      
      * todo - hub id
      
      * convert m/l
      
      * make fix copies
      
      * only import generation with torch
      
      * ignore decoder from tests
      
      * don't wrap uncond inputs
      
      * make style
      
      * cleaner uncond inputs
      
      * add example to musicgen forward
      
      * fix docs
      
      * ignore MusicGen Model/ForConditionalGeneration in auto mapping
      
      * add doc section to toctree
      
      * add to doc tests
      
      * add processor tests
      
      * fix push to hub in conversion
      
      * tips for decoder only loading
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * fix conversion for s / m / l checkpoints
      
      * import stopping criteria from module
      
      * remove from pipeline tests
      
      * fix uncond docstring
      
      * decode audio method
      
      * fix docs
      
      * org: sanchit-gandhi -> facebook
      
      * fix max pos embeddings
      
      * remove auto doc (not compatible with shapes)
      
      * bump max pos emb
      
      * make style
      
      * fix doc
      
      * fix config doc
      
      * fix config doc
      
      * ignore musicgen config from docstring
      
      * make style
      
      * fix config
      
      * fix config for doctest
      
      * consistent from_sub_models
      
      * don't automap decoder
      
      * fix mdx save audio file
      
      * fix mdx save audio file
      
      * processor batch decode for audio
      
      * remove keys to ignore
      
      * update doc md
      
      * update generation config
      
      * allow changes for default generation config
      
      * update tests
      
      * make style
      
      * fix docstring for uncond
      
      * fix processor test
      
      * fix processor test
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
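      A short text-to-audio sketch for the merged MusicGen model; the checkpoint id follows the "org: sanchit-gandhi -> facebook" step but is still an assumption, and the generation settings are illustrative.
      ```python
      from transformers import AutoProcessor, MusicgenForConditionalGeneration

      checkpoint = "facebook/musicgen-small"  # assumed Hub id

      processor = AutoProcessor.from_pretrained(checkpoint)
      model = MusicgenForConditionalGeneration.from_pretrained(checkpoint)

      inputs = processor(text=["lo-fi hip hop beat with soft piano"], padding=True, return_tensors="pt")
      audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)

      # shape: (batch, num_channels, num_samples); the audio encoder sub-config defines the sampling rate
      sampling_rate = model.config.audio_encoder.sampling_rate
      ```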
  6. 26 Jun, 2023 1 commit
  7. 20 Jun, 2023 1 commit
  8. 14 Jun, 2023 1 commit
    • [WIP] add EnCodec model (#23655) · 0c3fdccf
      Matthijs Hollemans authored
      
      
      * boilerplate stuff
      
      * messing around with the feature extractor
      
      * fix feature extractor
      
      * unit tests for feature extractor
      
      * rename speech to audio
      
      * quick-and-dirty import of Meta's code
      
      * import weights (sort of)
      
      * cleaning up
      
      * more cleaning up
      
      * move encoder/decoder args into config
      
      * cleanup model
      
      * rename EnCodec -> Encodec
      
      * RVQ parameters in config
      
      * add slow test
      
      * add lstm init and test_init
      
      * Add save & load
      
      * finish EncodecModel
      
      * remove decoder_input_values as they are not used anywhere (not removed from doc yet)
      
      * fix test feature extraction model name
      
      * Add better slow test
      
      * Fix tests
      
      * some fixup and cleaning
      
      * Improve further
      
      * cleaning up quantizer
      
      * fix up conversion script
      
      * tests don't pass, _encode_frame does not work
      
      * update tests with output per encode and decode
      
      * more cleanup
      
      * rename _codebook
      
      * remove old config cruft
      
      * ratios & hop_length
      
      * use ModuleList instead of Sequential
      
      * clean up resnet block
      
      * update types
      
      * update tests
      
      * fixup
      
      * quick cleanup
      
      * fix padding
      
      * more styling
      
      * add patrick feedback
      
      * fix copies
      
      * fixup
      
      * fix lstm
      
      * fix shape issues
      
      * fixup
      
      * rename conv layers
      
      * fixup
      
      * fix decoding
      
      * small conv refactoring
      
      * remove norm_params
      
      * simplify conv layers
      
      * rename conv layers
      
      * stuff
      
      * Clean up
      
      * Add padding logic
      
      use padding mask
      
      small conv refactoring
      
      remove norm_params
      
      simplify conv layers
      
      rename conv layers
      
      stuff
      
      add batched test
      
      update
      
      Clean up
      
      merge and update for padding
      
      fix padding
      
      fixup
      
      * clean up more
      
      * clean up more
      
      * More clean ups
      
      * cleanup convolutions
      
      * typo
      
      * fix typos
      
      * fixup
      
      * build PR doc?
      
      * start refactoring docstring
      
      * fix: don't pad when no stride and chunk
      
      * update docstring
      
      * update docstring
      
      * nits
      
      * update going to lunch
      
      * update config and model
      
      * fix broken tests (because of the config changes)
      
      * fix scale computation
      
      * fixup
      
      * only return dict if specified or if config returns it
      
      * remove todos
      
      * update defaults in config
      
      * update conversion script
      
      * fix doctest
      
      * more docstring + fixup
      
      * nits on batched_tests
      
      * more nits
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * update based on review
      
      * fix update
      
      * update tests
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * fixup
      
      * add overlap and chunk_length_s
      
      * cleanup feature extraction
      
      * test edge cases for truncation and padding
      
      * correct processor values
      
      * update config encodec, nits
      
      * fix tests
      
      * fixup
      
      * fix 24Hz test
      
      * all tests are green
      
      * fix fixup
      
      * Apply suggestions from code review
      
      * revert readme changes
      
      * fixup
      
      * add example
      
      * use facebook checkpoints
      
      * fix typo
      
      * no pipeline tests
      
      * use self.pad everywhere we can
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * update based on review
      
      * update
      
      * update mdx
      
      * fix bug and tests
      
      * fixup
      
      * fix doctest
      
      * remove comment
      
      * more nits
      
      * add more coverage for `test_truncation_and_padding`
      
      * fixup
      
      * add last test
      
      * fix text
      
      * nits
      
      * Update tests/models/encodec/test_modeling_encodec.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * take care of the last comments
      
      * typo
      
      * fix test
      
      * nits
      
      * fixup
      
      * Update src/transformers/models/encodec/feature_extraction_encodec.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: arthur.zucker@gmail.com <arthur.zucker@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
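      A round-trip encode/decode sketch for the merged EnCodec model, assuming the 24 kHz Facebook checkpoint and the encode/decode outputs referenced in the steps above; the silent input is only a stand-in for real audio.
      ```python
      import numpy as np
      from transformers import AutoProcessor, EncodecModel

      checkpoint = "facebook/encodec_24khz"  # assumed Hub id

      processor = AutoProcessor.from_pretrained(checkpoint)
      model = EncodecModel.from_pretrained(checkpoint)

      # one second of silence at the model's sampling rate, purely to exercise the round trip
      audio = np.zeros(processor.sampling_rate, dtype=np.float32)
      inputs = processor(raw_audio=audio, sampling_rate=processor.sampling_rate, return_tensors="pt")

      encoded = model.encode(inputs["input_values"], inputs["padding_mask"])
      reconstructed = model.decode(encoded.audio_codes, encoded.audio_scales, inputs["padding_mask"])[0]
      ```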
  9. 06 Jun, 2023 1 commit
    • Add TimmBackbone model (#22619) · a717e031
      amyeroberts authored
      
      
      * Add test_backbone for convnext
      
      * Add TimmBackbone model
      
      * Add check for backbone type
      
      * Tidying up - config checks
      
      * Update convnextv2
      
      * Tidy up
      
      * Fix indices & clearer comment
      
      * Exceptions for config checks
      
      * Correctly update config for tests
      
      * Safer imports
      
      * Safer safer imports
      
      * Fix where decorators go
      
      * Update import logic and backbone tests
      
      * More import fixes
      
      * Fixup
      
      * Only import all_models if torch available
      
      * Fix kwarg updates in from_pretrained & main rebase
      
      * Tidy up
      
      * Add tests for AutoBackbone
      
      * Tidy up
      
      * Fix import error
      
      * Fix up
      
      * Install natten in doc_test_job
      
      * Revert back to setting self._out_xxx directly
      
      * Bug fix - out_indices mapping from out_features
      
      * Fix tests
      
      * Dont accept output_loading_info for Timm models
      
      * Set out_xxx and don't remap
      
      * Use smaller checkpoint for test
      
      * Don't remap timm indices - check out_indices based on stage names
      
      * Skip test as it's n/a
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Cleaner imports / spelling is hard
      
      ---------
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
  10. 02 Jun, 2023 2 commits
  11. 31 May, 2023 1 commit
    • Add TensorFlow implementation of EfficientFormer (#22620) · 88f50a1e
      Denisa Roberts authored
      * Add tf code for efficientformer
      
      * Fix return dict bug - return last hidden state after last stage
      
      * Fix corresponding return dict bug
      
      * Override test tol
      
      * Change default values of training to False
      
      * Set training to default False X3
      
      * Rm axis from ln
      
      * Set init in dense projection
      
      * Rm debug stuff
      
      * Make style; all tests pass.
      
      * Modify year to 2023
      
      * Fix attention biases codes
      
      * Update the shape list logic
      
      * Add a batch norm eps config
      
      * Remove extract comments in test files
      
      * Add conditional attn and hidden states return for serving output
      
      * Change channel dim checking logic
      
      * Add exception for withteacher model in training mode
      
      * Revert layer count for now
      
      * Add layer count for conditional layer naming
      
      * Transpose for conv happens only in main layer
      
      * Make tests smaller
      
      * Make style
      
      * Update doc
      
      * Rm from_pt
      
      * Change to actual expect image class label
      
      * Remove stray print in tests
      
      * Update image processor test
      
      * Remove the old serving output logic
      
      * Make style
      
      * Make style
      
      * Complete test
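      An image-classification sketch for the new TF port; the checkpoint id is an assumption (the Snap Research EfficientFormer release) and the random array simply stands in for a real image.
      ```python
      import numpy as np
      from transformers import AutoImageProcessor, TFEfficientFormerForImageClassification

      checkpoint = "snap-research/efficientformer-l1-300"  # assumed Hub id

      image_processor = AutoImageProcessor.from_pretrained(checkpoint)
      # add from_pt=True if the repo only hosts PyTorch weights
      model = TFEfficientFormerForImageClassification.from_pretrained(checkpoint)

      image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # stand-in for a real image
      inputs = image_processor(images=image, return_tensors="tf")
      logits = model(**inputs).logits
      print(model.config.id2label[int(logits.numpy().argmax(-1)[0])])
      ```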
  12. 30 May, 2023 1 commit
  13. 19 May, 2023 2 commits
    • TF port of the Segment Anything Model (SAM) (#22970) · 1c460a52
      Matt authored
      
      
      * First commit
      
      * Add auto-translation with GPT-4
      
      * make fixup
      
      * Add a functional layernorm for TF
      
      * Add all the auxiliary imports etc.
      
      * Add the extra processor and tests
      
      * rebase to main
      
      * Add all the needed fixes to the GPT code
      
      * make fixup
      
      * Make convolutions channels-last so they run on CPU
      
      * make fixup
      
      * Fix final issues
      
      * Fix other models affected by test change
      
      * Clarify comment on the sparse_prompt_embeddings check
      
      * Refactor functional_layernorm, use shape_list in place of .shape in some places
      
      * Remove deprecated torch-alike code
      
      * Update tests/models/sam/test_modeling_tf_sam.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update tests/models/sam/test_modeling_tf_sam.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Refactor processor with common methods and separated private methods
      
      * make fixup
      
      * Quietly delete the file that didn't do anything (sorry Sylvain)
      
      * Refactor the processor tests into one file
      
      * make fixup
      
      * Clean up some unnecessary indirection
      
      * Fix TF mask postprocessing
      
      * Add more processor equivalence tests
      
      * Refactor generate_crop_boxes to use framework-neutral np code
      
      * Make the serving output correctly conditional
      
      * Fix error message line length
      
      * Use dict keys rather than indices internally in both TF and PT SAM call/forward
      
      * Return dicts internally in the call/forward methods
      
      * Revert changes to common tests and just override check_pt_tf_outputs
      
      * Revert changes to other model tests
      
      * Clarify comments for functional layernorm
      
      * Add missing transpose from PT code
      
      * Removed unused copied from in PT code
      
      * Remove overrides for tests that don't exist in TF
      
      * Fix transpose and update tests for PT and TF to check pred_masks
      
      * Add training flag
      
      * Update tests to use TF checkpoints
      
      * Update index.mdx
      
      * Add missing cross-test decorator
      
      * Remove optional extra asterisks
      
      * Revert return_dict changes in PT code
      
      * Update src/transformers/models/sam/modeling_tf_sam.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Remove None return annotations on init methods
      
      * Update tests/models/sam/test_processor_sam.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Fix input_boxes shapes
      
      * make fixup
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
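      A prompt-based mask prediction sketch with the TF port; the checkpoint id and the single point prompt are assumptions, and the random array stands in for a real image.
      ```python
      import numpy as np
      from transformers import SamProcessor, TFSamModel

      checkpoint = "facebook/sam-vit-base"  # assumed Hub id with TF weights

      processor = SamProcessor.from_pretrained(checkpoint)
      model = TFSamModel.from_pretrained(checkpoint)

      image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in for a real image
      input_points = [[[320, 240]]]  # one (x, y) point prompt for the single image

      inputs = processor(image, input_points=input_points, return_tensors="tf")
      outputs = model(**inputs)
      print(outputs.pred_masks.shape, outputs.iou_scores.shape)
      ```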
    • README: Fix affiliation for MEGA (#23394) · 3cf01b20
      Julien Chaumond authored
      
      
      * README: Fix affiliation for MEGA
      
      * Fix quality
      
      ---------
      Co-authored-by: Lysandre <lysandre@huggingface.co>
  14. 12 May, 2023 1 commit
  15. 09 May, 2023 1 commit
    • Add RWKV-4 (#22797) · b4d4d6fe
      Sylvain Gugger authored
      
      
      * First draft of RWKV-4
      
      * Add support for generate
      
      * Style post-rebase
      
      * Properly use state
      
      * Write doc
      
      * Fix doc
      
      * More math
      
      * Add model to README, dummies and clean config
      
      * Fix init
      
      * multiple fixes:
      
      - fix common tests
      - fix configuration default values
      - add CI test for checking state computation
      - fix some CI tests
      
      * correct tokenizer
      
      * some tweaks
      
      - fix config docstring
      - fix failing tests
      
      * fix CI tests
      
      - add output_attention / output_hidden_states
      - override test_initialization
      - fix failing CIs
      
      * fix conversion script
      
      - fix sharded case
      - add new arguments
      
      * add slow tests + more fixes on conversion script
      
      * add another test
      
      * final fixes
      
      * change single name variable
      
      * add mock attention mask for pipeline to work
      
      * correct eos token id
      
      * fix nits
      
      * add checkpoints
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * add `tie_word_embeddings` in docstring
      
      * change tensor name
      
      * fix final nits
      
      * Trigger CI
      
      ---------
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
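      A generation sketch for the merged RWKV-4 model; the checkpoint id is an assumption. The last two lines illustrate carrying the recurrent state across forward calls, which the "Properly use state" step above refers to.
      ```python
      from transformers import AutoTokenizer, RwkvForCausalLM

      checkpoint = "RWKV/rwkv-4-169m-pile"  # assumed Hub id

      tokenizer = AutoTokenizer.from_pretrained(checkpoint)
      model = RwkvForCausalLM.from_pretrained(checkpoint)

      inputs = tokenizer("The RWKV architecture replaces attention with", return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))

      # the recurrent state can be threaded through successive forward passes
      first = model(**inputs, use_cache=True)
      second = model(inputs["input_ids"][:, -1:], state=first.state)
      ```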
  16. 28 Apr, 2023 1 commit
  17. 27 Apr, 2023 1 commit
  18. 23 Apr, 2023 1 commit
  19. 20 Apr, 2023 1 commit
  20. 19 Apr, 2023 1 commit
    • Add Segment Anything Model (SAM) (#22654) · 474bf508
      Arthur authored
      
      
      * initial commit
      
      * keys match
      
      * update, fix conversion
      
      * fixes, inference working
      
      * fix
      
      * more fixes
      
      * more fixes
      
      * clean up
      
      * more clean up
      
      * fix copies and add convext copied layer norm
      
      * stash
      
      * pretty big update
      
      * cleaning
      
      * more cleaning
      
      * fixup stuffs
      
      * fix copies
      
      * fix init
      
      * update test removing tokenizer
      
      * nits
      
      * add pretrained
      
      * more nits
      
      * remove tracking of pipeline
      
      * few fixes
      
      * update sam and conversion script
      
      * fix mask decoder and prompt encoder conversion
      
      * fixes
      
      * small update
      
      * fix order
      
      * fix
      
      * fix image embeddings
      
      * nits
      
      * few fixes
      
      * fix logits
      
      * clean up
      
      * fixes boxes inference
      
      * v1 AMG
      
      * clean up
      
      * some clean up
      
      * multi points support
      
      * amg working
      
      * fixup
      
      * clean up
      
      * readme
      
      * update toctree
      
      * fix type hint
      
      * multiple fixes
      
      * fixup
      
      * fixes
      
      * updates
      
      * updates
      
      * more tests
      
      * few fixes
      
      * change to `SamForMaskGeneration`
      
      * doc
      
      * fixup
      
      * fix more tests
      
      * multiple fixes
      
      * fix CI tests
      
      * refactor processor
      
      * renamings
      
      * draft the pipeline
      
      * refactor
      
      * fix tests
      
      * fix test
      
      * few cleanings
      
      * fix test
      
      * edit pipeline to support chunking
      
      * update
      
      * add slow tests
      
      * fix nit
      
      * fixup
      
      * fix nit
      
      * current chunk pipeline
      
      * cast boxes in fp32
      
      * nit
      
      * current updates
      
      * pipeline works
      
      * fixup
      
      * clean up config
      
      * fix slow tests
      
      * fix slow tests
      
      * clean up
      
      * update doc and pipeline
      
      * adds more slow tests
      
      * fix slow tests
      
      * cleaning
      
      * tests pass
      
      * add docstring
      
      * fix copies
      
      * clean up
      
      * support batch of images
      
      * style
      
      * dummy is needed, add tests
      
      * fix slow tests
      
      * fix CI
      
      * update
      
      * adds more tests
      
      * fixes
      
      * fixes
      
      * fixup
      
      * fixes
      
      * few fixes
      
      * filter
      
      * few fixes
      
      * some refactor
      
      * finishing touches
      
      * fix
      
      * style
      
      * remove pipeline files
      
      * fixes nits
      
      * revert pipeline changes
      
      * fix test
      
      * fixup
      
      * remove automodel for automatic mask generation
      
      * fix failing torch tests
      
      * update mdx
      
      * revert removal of `MODEL_FOR_AUTOMATIC_MASK_GENERATION_MAPPING`
      
      * update sam config based on review
      Co-authored-by: amyeroberts <aeroberts4444@gmail.com>
      Co-authored-by: sgugger <sylvain.gugger@gmail.com>
      
      * update low_resolution_masks -> pred_masks
      init ln with layer_norm_eps
      add_decomposed_rel_pos doc
      forward doc of SamForMaskGeneration
      
      * update processor docstring
      
      * remove image processor import empty
      
      * update for testing
      
      * output vision hidden states + clean recomm
      also test all iou values
      
      * fixup
      
      * fixup
      
      * remove unused
      
      * Update src/transformers/models/sam/modeling_sam.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/sam/image_processing_sam.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * nits
      
      * fix
      
      * fix CI tests and slow tests
      
      * replace with Amy's processor
      
      * clearer docstring
      
      * add `SamVisionNeck`
      
      * refactor - all CI tests should pass
      
      * fix broken import on Gcolab
      
      * few fixes here and there
      
      * fix another bug
      
      * fix more bugs
      
      * update and merge
      
      * correct ckpt
      
      * address comments
      
      * add tips
      
      * revert
      
      * fix docstring
      
      * replace with `SamModel`
      
      * make fixup
      
      * add support for batched images and batched points
      
      * make fixup this time, really
      
      * make fixup again and again
      
      * few fixes here and there, this should be the finishing touch
      
      * Update docs/source/en/model_doc/sam.mdx
      
      * fixup
      
      * correct checkpoints
      
      * correct name
      
      * rm unneeded file
      
      * add notebook
      
      ---------
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>
      Co-authored-by: amyeroberts <aeroberts4444@gmail.com>
      Co-authored-by: sgugger <sylvain.gugger@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
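      A point-prompted segmentation sketch for the merged SamModel; the checkpoint id and prompt coordinates are assumptions, and the random array stands in for a real image.
      ```python
      import numpy as np
      import torch
      from transformers import SamModel, SamProcessor

      checkpoint = "facebook/sam-vit-huge"  # assumed Hub id

      processor = SamProcessor.from_pretrained(checkpoint)
      model = SamModel.from_pretrained(checkpoint)

      image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in for a real image
      input_points = [[[320, 240]]]  # one (x, y) point prompt

      inputs = processor(image, input_points=input_points, return_tensors="pt")
      with torch.no_grad():
          outputs = model(**inputs)

      masks = processor.image_processor.post_process_masks(
          outputs.pred_masks, inputs["original_sizes"], inputs["reshaped_input_sizes"]
      )
      ```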
  21. 12 Apr, 2023 1 commit
    • add model resources for CPMAnt (new) (#20906) · 523ca4e0
      pioliverse authored
      
      
      * resolve conflicts
      
      * rebase and make style
      
      * test
      
      * test
      
      * test
      
      * rebase and make style
      
      * rebase and make style
      
      * tests
      
      * tests
      
      * rewrite some functions
      
      * rebase and make style
      
      * fix load_tf_weights_in_cpmant
      
      * reformat some unrelated files
      
      * upgrade quality
      
      * fix some bugs & docstring
      
      * add models and tests
      
      * solve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * tests
      
      * resolve conflicts
      
      * resolve conflicts
      
      * fix load_tf_weights_in_cpmant
      
      * reformat some unrelated files
      
      * upgrade quality
      
      * fix some bugs & docstring
      
      * save resolution
      
      * make style
      
      * delete redefinition code
      
      * reformat function
      
      * reformat
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * tests
      
      * resolve conflicts
      
      * resolve conflicts
      
      * fix load_tf_weights_in_cpmant
      
      * reformat some unrelated files
      
      * upgrade quality
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * resolve conflicts
      
      * fix load_tf_weights_in_cpmant
      
      * reformat some unrelated files
      
      * upgrade quality
      
      * resolve conflicts
      
      * make style
      
      * fix bugs and refactor
      
      * modify docstrings and make style
      
      * unify import format in __init__.py
      
      * fix import-altclp bug
      
      * fix copies to update index.md
      
      * fix unused config parameters
      
      * fix unused config parameters
      
      * fix unused config parameters
      
      * update README_ja.md
      
      * dummy commit for unit test
      
      * fix attention mask
      
      * add CPMAntTokenizer&-Fast to auto-mapping
      
      * drop redundant changes in README_ko
      
      * fix  defaults in docstring
      
      * fix use_cache and some docstring
      
      * add missing args in tokenizer
      
      * modify tester inheritance
      
      * add is_jieba_available
      
      * fix some bugs
      
      * make style and fix-copies
      
      * add doctests
      
      * skip integration tests
      
      * add is_jieba_available
      
      * fix bugs in common tests
      
      * adjust docstrings and make style
      
      * add argument docstring
      
      * adjust code to some specifications
      
      * make style and fix-copies
      
      * add fast tokenization test
      
      * dummy commit for unit test
      
      * dummy commit for unit test
      
      * dummy commit for unit test
      
      * normalize some comments and names
      
      * Bert->CPMAnt
      
      * camel names and drop redundant codes
      
      * make style and fix-copies
      
      * add CpmTokenizerFast _import_structure
      
      * drop cpmanttokenizerfast in model_doc
      
      * fix some problems
      
      * fix CPMAnt tokenization for common test
      
      * make style and fixup
      
      * fix copies and fixup
      
      * fix bugs in tokenization test
      
      * dummy commit for connection failure in unittest
      
      * fix copies
      
      * drop trailing comma
      
      * fix decorator in tests
      
      * dummy commit for connection failure in unittest
      
      ---------
      Co-authored-by: Gong Baitao <gongbaitao11@gmail.com>
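      A generation sketch for the new CPM-Ant support, using the Auto classes so the exact class names stay out of the way; the checkpoint id is an assumption and the tokenizer needs `jieba` installed.
      ```python
      from transformers import AutoModelForCausalLM, AutoTokenizer

      checkpoint = "openbmb/cpm-ant-10b"  # assumed Hub id; requires `pip install jieba` for the tokenizer

      tokenizer = AutoTokenizer.from_pretrained(checkpoint)
      model = AutoModelForCausalLM.from_pretrained(checkpoint)

      # Chinese prompt ("The weather is really nice today,"); CPM-Ant is a Chinese language model
      inputs = tokenizer("今天天气真好，", return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(outputs[0]))
      ```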
  22. 10 Apr, 2023 1 commit
    • Add GPTBigCode model (Optimized GPT2 with MQA from Santacoder & BigCode) (#22575) · e0921c6b
      Joel Lamy-Poirier authored
      
      
      * Add model with cli tool
      
      * Remove unwanted stuff
      
      * Add new code
      
      * Remove inference runner
      
      * Style
      
      * Fix checks
      
      * Test updates
      
      * make fixup
      
      * fix docs
      
      * fix doc
      
      * fix test
      
      * hopefully fix pipeline tests
      
      * refactor
      
      * fix CIs
      
      * add comment
      
      * rename to `GPTBigCodeForCausalLM`
      
      * correct readme
      
      * make fixup + docs
      
      * make fixup
      
      * fixes
      
      * fixes
      
      * Remove pruning
      
      * Remove import
      
      * Doc updates
      
      * More pruning removal
      
      * Combine copies
      
      * Single MQA implementation, remove kv cache pre-allocation and padding
      
      * Update doc
      
      * Revert refactor to match gpt2 style
      
      * Merge back key and value caches, fix some type hints
      
      * Update doc
      
      * Fix position ids with padding (PR 21080)
      
      * Add conversion script temporarily
      
      * Update conversion script
      
      * Remove checkpoint conversion
      
      * New model
      
      * Fix MQA test
      
      * Fix copies
      
      * try fix tests
      
      * FIX TEST!!
      
      * remove  `DoubleHeadsModel`
      
      * add MQA tests
      
      * add slow tests
      
      * clean up
      
      * add CPU checker
      
      * final fixes
      
      * fixes
      
      - fix GPU issue
      - fixed slow tests
      - skip disk offload
      
      * fix final issue
      
      * Simplify and comment baddbmm fix
      
      * Remove unnecessary code
      
      * Transpose tweaks
      
      * Use beta=1 on cpu, improve tests
      
      ---------
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>
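      A code-completion sketch for GPTBigCodeForCausalLM (the class name comes from the rename step above); the SantaCoder checkpoint id is an assumption.
      ```python
      from transformers import AutoTokenizer, GPTBigCodeForCausalLM

      checkpoint = "bigcode/gpt_bigcode-santacoder"  # assumed Hub id

      tokenizer = AutoTokenizer.from_pretrained(checkpoint)
      model = GPTBigCodeForCausalLM.from_pretrained(checkpoint)

      inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=40)
      print(tokenizer.decode(outputs[0]))
      ```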
  23. 06 Apr, 2023 1 commit
    • Adding Llama FastTokenizer support. (#22264) · 1670be4b
      Nicolas Patry authored
      * Adding Llama FastTokenizer support.
      
      - Requires https://github.com/huggingface/tokenizers/pull/1183 version
      - Only support byte_fallback for llama, raise otherwise (safety net).
      - Lots of questions are special tokens
      
      How to test:
      
      ```python
      
      from transformers.convert_slow_tokenizer import convert_slow_tokenizer
      from transformers import AutoTokenizer
      from tokenizers import Tokenizer
      
      tokenizer = AutoTokenizer.from_pretrained("huggingface/llama-7b")
      
      if False:
          new_tokenizer = Tokenizer.from_file("tok.json")
      else:
          new_tokenizer = convert_slow_tokenizer(tokenizer)
          new_tokenizer.save("tok.json")
      
      strings = [
          "This is a test",
          "生活的真谛是",
          "生活的真谛是[MASK]。",
          # XXX: This one is problematic because of special tokens
          # "<s> Something something",
      ]
      
      for string in strings:
          encoded = tokenizer(string)["input_ids"]
          encoded2 = new_tokenizer.encode(string).ids
      
          assert encoded == encoded2, f"{encoded} != {encoded2}"
      
          decoded = tokenizer.decode(encoded)
          decoded2 = new_tokenizer.decode(encoded2)
      
          assert decoded.strip() == decoded2, f"{repr(decoded)} != {repr(decoded2)}"
      ```
      
      The converter + some test script.
      
      The test script.
      
      Tmp save.
      
      Adding Fast tokenizer + tests.
      
      Adding the tokenization tests.
      
      Correct combination.
      
      Small fix.
      
      Fixing tests.
      
      Fixing with latest update.
      
      Rebased.
      
      fix copies + normalized added tokens  + copies.
      
      Adding doc.
      
      TMP.
      
      Doc + split files.
      
      Doc.
      
      Versions + try import.
      
      Fix Camembert + warnings -> Error.
      
      Fix by ArthurZucker.
      
      Not a decorator.
      
      * Fixing comments.
      
      * Adding more to docstring.
      
      * Doc rewriting.
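      Once a LLaMA checkpoint has been converted locally, the fast tokenizer added here loads through the usual entry points; the path below is a placeholder, not a real checkpoint.
      ```python
      from transformers import AutoTokenizer, LlamaTokenizerFast

      # placeholder path to a locally converted LLaMA checkpoint
      tokenizer = AutoTokenizer.from_pretrained("path/to/llama-checkpoint", use_fast=True)
      assert isinstance(tokenizer, LlamaTokenizerFast)

      print(tokenizer("This is a test")["input_ids"])
      ```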
  24. 05 Apr, 2023 1 commit
  25. 04 Apr, 2023 2 commits
  26. 27 Mar, 2023 1 commit
    • [WIP]`NLLB-MoE` Adds the moe model (#22024) · 19ade242
      Arthur authored
      * Initial commit
      
      * update modeling code
      
      * update doc
      
      * add functions necessary
      
      * fix imports
      
      * revert changes
      
      * fixup
      
      * more styling to get going
      
      * remove standalone encoder
      
      * update code
      
      * styling
      
      * fix config and model
      
      * update code and some refactoring
      
      * make more tests pass
      
      * Adding NLLB-200 - MoE - 54.5B for no language left behind
      Fixes #21300
      
      * fix more common tests
      
      * style
      
      * update testing file
      
      * update
      
      * update
      
      * Router2 doc
      
      * update check config with sparse layer
      
      * add dummy router
      
      * update current conversion script
      
      * create on the fly conversion script
      
      * Fixup
      
      * style
      
      * style 2
      
      * fix empty return
      
      * fix return
      
      * Update default config sparse layers
      
      * easier to create sparse layers
      
      * update
      
      * update conversion script
      
      * update modeling
      
      * add to toctree
      
      * styling
      
      * make ruff happy
      
      * update docstring
      
      * update conversion script
      
      * update, will break tests but implementing top2
      
      * update
      
      * local groups are supported here
      
      * Support for local groups is now removed
      
      This is because it has to work with model parallelism that we do not support
      
      * finish simplification
      
      * Fix forward
      
      * style
      
      * fixup
      
      * Update modelling and test, refactoring
      
      * update tests
      
      * remove final layer norm as it is done in the FF
      
      * routing works! Logits test added
      
      * nit in test
      
      * remove top1router
      
      * style
      
      * make sure sparse layers are tested. Had to change route_tokens a little bit
      
      * add support for unslip models when converting
      
      * fixup
      
      * style
      
      * update tests
      
      * update test
      
      * REFACTOR
      
      * encoder outputs match!
      
      * style
      
      * update testing
      
      * 🎉encoder and decoder logits match 🎉
      
      
      
      * styling
      
      * update tests
      
      * cleanup tests
      
      * fix router test and CIs
      
      * cleanup
      
      * cleanup test styling
      
      * fix tests
      
      * Finally the generation tests match!
      
      * cleanup
      
      * update test
      
      * style testing file
      
      * remove script
      
      * cleanup
      
      * more cleanup
      
      * nits
      
      * update
      
      * NLLB tokenizer is wrong and will be fixed soon
      
      * use LongTensors
      
      * update tests
      
      * revert some small changes
      
      * fix second expert sampling and batch prioritized routing
      
      * update tests
      
      * finish last tests
      
      * make ruff happy
      
      * update
      
      * ruff again
      
      * style
      
      * Update docs/source/en/model_doc/nllb-moe.mdx
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Updates based on review
      
      * style and fix import issue
      
      * nit
      
      * more nits
      
      * cleanup
      
      * styling
      
      * update test_seconde_expert_policy
      
      * fix name
      
      * last nit on the markdown examples
      
      ---------
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
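      A translation sketch for the new NLLB-MoE model; the 54B checkpoint id, the language codes, and the `lang_code_to_id` lookup are assumptions carried over from the dense NLLB tokenizer.
      ```python
      from transformers import AutoTokenizer, NllbMoeForConditionalGeneration

      checkpoint = "facebook/nllb-moe-54b"  # assumed Hub id for the 54.5B MoE release

      tokenizer = AutoTokenizer.from_pretrained(checkpoint, src_lang="eng_Latn")
      model = NllbMoeForConditionalGeneration.from_pretrained(checkpoint)

      inputs = tokenizer("No language left behind.", return_tensors="pt")
      outputs = model.generate(
          **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"], max_new_tokens=30
      )
      print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
      ```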
  27. 24 Mar, 2023 2 commits
    • Resnet flax (#21472) · a0cbbba3
      Shubhamai authored
      
      
      * [WIP] flax resnet
      
      * added pretrained flax models, results reproducible
      
      * Added pretrained flax models, results reproducible
      
      * working on tests
      
      * no real code change, just some comments
      
      * [flax] adding support for batch norm layers
      
      * fixing bugs related to pt+flax integration
      
      * removing loss from modeling flax output class
      
      * fixing classifier tests
      
      * fixing comments, model output
      
      * cleaning comments
      
      * review changes
      
      * review changes
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * renaming Flax to PyTorch
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
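      A Flax image-classification sketch for the new port; the checkpoint id is an assumption, and `from_pt=True` may be needed if the repo only hosts PyTorch weights.
      ```python
      import numpy as np
      from transformers import AutoImageProcessor, FlaxResNetForImageClassification

      checkpoint = "microsoft/resnet-50"  # assumed Hub id; add from_pt=True if only PT weights exist

      image_processor = AutoImageProcessor.from_pretrained(checkpoint)
      model = FlaxResNetForImageClassification.from_pretrained(checkpoint)

      image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # stand-in for a real image
      inputs = image_processor(images=image, return_tensors="np")
      logits = model(**inputs).logits
      print(model.config.id2label[int(logits.argmax(-1)[0])])
      ```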
    • Add Mega: Moving Average Equipped Gated Attention (#21766) · 57f25f4b
      Mitch Naylor authored
      
      
      * add mega file structure and plain pytorch version of mega source code
      
      * added config class with old naming conventions
      
      * filled in mega documentation
      
      * added config class and embeddings with optional token types
      
      * updated notes
      
      * starting the conversion process, deleted intermediate and added use_cache back to config
      
      * renamed config attributes in modeling_mega.py
      
      * checkpointing before refactoring incremental decoding functions
      
      * removed stateful incremental key/values for EMA and self-attention
      
      * refactored MovingAverageGatedAttention to remove stateful k/v history and use unified attention mask
      
      * MovingAverageGatedAttention works with incremental decoding + past values, added sequence length enforcement
      
      * more comments in MovingAverageGatedAttention + checkpointing before GatedCrossAttention
      
      * bug fix in attention mask handling in MovingAverageGatedAttention
      
      * removed incremental state from GatedCrossAttention and removed IncrementalState class
      
      * finished gated cross attention and got MegaLayer working
      
      * fixed causal masking in mega decoder
      
      * fixed how padding and causal masks are passed through MegaLayer with and without k/v caching
      
      * finished MegaModel; tested with encoder, decoder-only, and cross-attention type inputs; started work on downstream classes; removed mentions of position_ids
      
      * added optional dense hidden layer for masked and causal LM classes
      
      * docstring updates in MultiHeadEMA and GatedCrossAttention, removed unnecessary inputs in cross-attention
      
      * removed before_attn_fn in Mega class and updated docstrings and comments up to there
      
      * bug fix in MovingAverageGatedAttention masking
      
      * working conversion of MLM checkpoint in scratchpad script -- perfect matches
      
      * moved arg for hidden dense layer in LM head to config; discovered issue where from_pretrained is renaming gamma and beta parameters
      
      * renamed gamma and beta parameters to avoid HF renaming when loading from checkpoint
      
      * finished checkpoint conversion script
      
      * cleanup old class in mega config script
      
      * removed 'copied from' statements and passing integration tests
      
      * added num_attention_heads=1 to config for integration compatibility, decoder tests working, generation tests failing
      
      * fixed tuple output of megamodel
      
      * all common tests passing after fixing issues in decoder, gradient retention, and initialization
      
      * added mega-specific tests, ready for more documentation and style checks
      
      * updated docstrings; checkpoint before style fixes
      
      * style and quality checks, fixed initialization problem in float_tensor, ready for PR
      
      * added mega to toctree
      
      * removed unnecessary arg in megaconfig
      
      * removed unused arg and fixed code samples with leftover roberta models
      
      * Apply suggestions from code review
      
      Applied all suggestions except the one renaming a class, as I'll need to update that throughout
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fixed issue where .view breaks batch dimension, conversion script fixed with absolute imports, updated readme with Mega->MEGA
      
      * removed asserts in Mega code, renamed sequencenorm, gatedcrossattention, and NFFN, replaced get_activation_fn with ACTFN, and added sequencenorm to layer norms
      
      * reformatted .forward() docstrings to match style and removed unused mask input in cross-attention
      
      * removed all reset_parameters() methods and rolled into MegaPreTrainedModel._init_weights()
      
      * renamed all single-letter variables and improved readability in tensor size comments, Mega->MEGA in 2 documentation files
      
      * variable names in NFFN
      
      * manual Mega->MEGA changes in docs
      
      * Mega->MEGA in config auto
      
      * style and quality fixes
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * renamed parameters and variables with confusing names, added copied from statements, moved fft conv to its own method, other cleanup from PR comments
      
      * commit before dealing with merge conflicts
      
      * made new attention activation functions available in ACT2FN and added generation test from OPT
      
      * style and quality in activations and tests
      
      * documentation fixes, renaming variables in dropout and rotary positions, used built-in causal masking, encoders->layers in MegaModel, moved comments into docstrings
      
      * style and quality fixes after latest updates, before rotary position ids
      
      * causal mask in MegaBlock docstring + added missing device passing
      
      * Apply suggestions from code review
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * added Mega prefixes where missing, reverted MegaSequenceNorm to if-else, other module renaming requested in PR
      
      * style and quality fixes + readme updates pointing to main
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
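      A masked-LM sketch for the merged MEGA model; the checkpoint id is an assumption and the small wikitext model is only meant to exercise the forward pass, not to give good predictions.
      ```python
      from transformers import AutoTokenizer, MegaForMaskedLM

      checkpoint = "mnaylor/mega-base-wikitext"  # assumed Hub id

      tokenizer = AutoTokenizer.from_pretrained(checkpoint)
      model = MegaForMaskedLM.from_pretrained(checkpoint)

      text = f"The capital of France is {tokenizer.mask_token}."
      inputs = tokenizer(text, return_tensors="pt")
      logits = model(**inputs).logits

      mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
      predicted_id = int(logits[0, mask_index].argmax())
      print(tokenizer.decode([predicted_id]))
      ```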
  28. 22 Mar, 2023 1 commit
  29. 16 Mar, 2023 1 commit
    • LLaMA Implementation (#21955) · 0041be5b
      Jason Phang authored
      
      
      * LLaMA
      
      * sharding and docs
      
      * tweak
      
      * black
      
      * inits
      
      * ruff
      
      * LLAMA_PRETRAINED_CONFIG_ARCHIVE_MAP
      
      * init
      
      * no checkpoint
      
      * docs
      
      * ruff
      
      * type_vocab_size
      
      * tokenizer fixes
      
      * tokenizer fixes
      
      * Update tokenization_llama.py
      
      * Update tokenization_llama.py
      
      * Update configuration_llama.py
      
      * Update modeling_llama.py
      
      * tokenizer add_bos by default
      
      * licenses
      
      * remove decoder
      
      * norms and mlp
      
      * rope overhaul
      
      * tweaks
      
      * black
      
      * mention OPT implementation
      
      * off-by-one naming
      
      * typo
      
      * fix
      
      * tokenization fix and slicing bug
      
      * padding config
      
      * cleanup
      
      * black
      
      * update tests
      
      * undo typo
      
      * fix vocab caching logic
      
      * ruff
      
      * docbuilder
      
      * attn fix from BlackSamorez
      
      * initial feedback
      
      * typo
      
      * docs
      
      * llama case
      
      * llama case
      
      * load checkpoint docs
      
      * comment about tokenizer
      
      * tokenizer defaults
      
      * clear past_key_values if use_cache=False
      
      * last tweaks
      
      * last tweaks
      
      * last tweaks
      
      * last tweaks
      
      ---------
      Co-authored-by: default avatarStella Biderman <stellabiderman@gmail.com>
      0041be5b
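      Since the bullets above note "no checkpoint" and "load checkpoint docs", here is a minimal, hedged sketch of how the new classes are meant to be used once the original weights have been converted locally; the path is a placeholder, not a real hub repo.

      ```python
      # Sketch only: "path/to/converted/llama-7b" is a placeholder for a directory
      # produced by the LLaMA weight-conversion script.
      from transformers import LlamaForCausalLM, LlamaTokenizer

      tokenizer = LlamaTokenizer.from_pretrained("path/to/converted/llama-7b")
      model = LlamaForCausalLM.from_pretrained("path/to/converted/llama-7b")

      # The tokenizer prepends BOS by default ("tokenizer add_bos by default" above).
      inputs = tokenizer("The capital of France is", return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
      ```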
  30. 14 Mar, 2023 1 commit
  31. 13 Mar, 2023 1 commit
    • wangpeng's avatar
      add new model of MGP-STR (#21418) · 102b5ff4
      wangpeng authored
      
      
      * add new model of MGP-STR
      
      * fix the check failings
      
      * remove torch and numpy from mgp_tokenization
      
      * remove unused import from modeling_mgp_str
      
      * add test_processing_mgp_str
      
      * rm test_processing_mgp_str.py
      
      * add test_processing_mgp_str
      
      * add test_processing_mgp_str
      
      * add test_processing_mgp_str
      
      * rm test_processing_mgp_str and add softmax outs to model
      
      * rm test_processing_mgp_str and add softmax outs to model
      
      * rewrite the code of mgp-str according to PR suggestions
      
      * rewrite the code of mgp-str according to PR suggestions
      
      * remove representation_size from MGPSTRConfig
      
      * reformat configuration_mgp_str.py
      
      * format test_processor_mgp_str.py
      
      * add test for tokenizer and complete model/processor test and model file
      
      * rm unnecessary tuple in modeling_mgp_str
      
      * reduce hidden_size/layers/label_size in test_model
      
      * add integration tests and change MGPSTR to Mgpstr
      
      * add test for logit values
      
      * reformat test model file
      
      ---------
      Co-authored-by: default avataryue kun <yuekun.wp@alibaba-inc.com>
      102b5ff4
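      For orientation, a minimal usage sketch of the processor/model pair this PR introduces; the class names follow the Mgpstr prefix adopted in the bullets above, but the checkpoint "alibaba-damo/mgp-str-base" and the exact return shape of batch_decode are assumptions on my part, not guarantees from the commit log.

      ```python
      from PIL import Image
      from transformers import MgpstrProcessor, MgpstrForSceneTextRecognition

      # Assumed checkpoint name; swap in the MGP-STR checkpoint you actually use.
      processor = MgpstrProcessor.from_pretrained("alibaba-damo/mgp-str-base")
      model = MgpstrForSceneTextRecognition.from_pretrained("alibaba-damo/mgp-str-base")

      # "word_crop.png" is a placeholder: a cropped image of a single text region.
      image = Image.open("word_crop.png").convert("RGB")
      pixel_values = processor(images=image, return_tensors="pt").pixel_values

      outputs = model(pixel_values)
      # batch_decode is expected to turn the character-level logits into strings.
      text = processor.batch_decode(outputs.logits)["generated_text"]
      print(text)
      ```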
  32. 07 Mar, 2023 1 commit
    • Eli Simhayev's avatar
      [Time-Series] informer model (#21099) · 8abe4930
      Eli Simhayev authored
      * added informer to gitignore
      
      * WIP informer2020
      
      * added checking that instantiate works
      
      * added config using gluonTS by kashif
      
      * WIP config
      
      * adding InformerConfig. need to remove FeatureEmbedder
      
      * done InformerConfig, but need to change the names
      
      * Done informer model init. working on enc-dec
      
      * added things to address, after reading again enc-dec in the paper
      
      * done modeling - checking initialization work
      
      * moved enc-dec init to InformerEncoder/Decoder init
      
      * added 'init_std' to config, now model init works!
      
      * WIP conversion script, and added code sources
      
      * WIP conversion script: loading original informer pth works
      
      * WIP conversion script: change defaults in the config
      
      * WIP conversion script: supporting Informer input embedding
      
      * WIP conversion script: added parameters for the informer embed
      
      * WIP conversion script: change dim_feedforward=2048
      
      * WIP conversion script: remove unused args for loading checkpoint
      
      * just cleaning up
      
      * DataEmbedding removed, after thinking with Kashif
      
      * working on forward pass
      
      * WIP forward pass: trying to establish working batch for forward pass
      
      * cleaning and finalizing
      
      * adding HF names and docs
      
      * init after cleaning works
      
      * WIP in tests
      
      * added docs for the informer specific args
      
      * fix style
      
      * undo change
      
      * cleaning informer, now need to work only enc-dec
      
      * initial enc-dec classes
      
      * added encoder and decoder
      
      * added todo
      
      * add todos for conv_layers
      
      * added decoder docs from vanilla
      
      * added encoder docs from vanilla
      
      * remove encoder decoder from the original informer
      
      * removed AttentionLayer from the original paper
      
      * removed TriangularCausalMask, same as decoder_attention_mask
      
      * initial sparse attention
      
      * use conv_layers
      
      * fixed test_config test
      
      * fix parentheses when iterating zip(layers, conv_layers)
      
      * error found in prob attention, added sizes as comments
      
      * fix sizes
      
      * added proposal for q_reduce indexing, and remove unused
      
      * WIP ProbMask, and changed factor=2 for testing
      
      * remove unused libs for this PR for creating the env
      
      * fix checking the attn_weights.size() after bmm
      
      * Q_reduce: changed from torch.gather to simple slicing
      
      * WIP calculate final attn_output
      
      * finish adding v_aggregated, attn_output ready
      
      * changed tgt_len to u in attention_mask, need to fix the size error
      
      * comment attention_mask for encoder, and fix if cond for v_agg
      
      * added ProbMask support (wip), removed old original code
      
      * finished ProbMask 😃
      
      
      
      * Revert "remove unused libs for this PR for creating the env"
      
      This reverts commit 11a081e09e92771e51a5d2758d53a9afb59547f0.
      
      * fixes
      
      * make style
      
      * fix initial tests
      
      * fix more tests
      
      * dry
      
      * make style
      
      * remove unused files
      
      * style
      
      * added integration tests
      
      * fix num_static_real_features
      
      * fix header
      
      * remove unused function
      
      * fix example
      
      * fix docs
      
      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/informer/modeling_informer.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * fixes for reviewer
      
      * use prediction_length from model
      
      * fix style
      
      * fixed informer.mdx
      
      * added to index
      
      * updated readme
      
      * undo
      
      * make fix-copies
      
      * typo
      
      * fix copy
      
      * added Informer to toctree
      
      * in order
      
      * fixed comments
      
      * remove unneeded new lines in docs
      
      * make static real and cat optional
      
      * fix use of distil conv layers
      
      * fixed integration test
      
      * added checkpoint for convlayer
      
      * make fix-copies
      
      * updated from time series model
      
      * make fix-copies
      
      * copy decoder
      
      * fix unit tests
      
      * updated scaling config
      
      * fix integration tests
      
      * IGNORE_NON_TESTED
      
      * IGNORE_NON_AUTO_CONFIGURED
      
      * IGNORE_NON_AUTO_CONFIGURED
      
      * updated check configs
      
      * fix formatting
      
      * undo change from time series
      
      * prediction_length should not be None
      
      * align with the blog: prettify ProbSparse and change attention_factor to sampling_factor
      
      * make style
      
      * make fix-copies
      
      * niels CR: update contributed by
      
      * niels CR: update configuration_informer.py
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * niels CR: update kashif -> huggingface
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * niels CR: `sampling_factor` only relevant when `attention_type`=prob
      
      * make style
      
      * fixed U_part: added multiplication by `L_Q`
      
      * fixed bug: remove `is not None` from `if config.distil`
      
      * fixed test: `decoder_seq_length` to `encoder_seq_length` in cross_attentions check
      
      * fix integration tests
      
      * updated model hub
      
      * do not shift as in training
      
      * undo
      
      * fix make-copies
      
      * make fix-copies
      
      * added `if prediction_length is None`
      
      * changed `ProbSparseAttention` to `InformerProbSparseAttention`
      
      * changed `V_sum` -> `v_mean_dim_time`
      
      * changed `ConvLayer` to `InformerConvLayer` and fixed `super()`
      
      * TimeSeriesTransformer->Informer in decoder's Copied from
      
      * more descriptive in ProbSparse
      
      * make style
      
      * fix copied from
      
      * Revert "added `if prediction_length is None`"
      
      This reverts commit b4cbddfa05e3bd739b79569cd3c3b89e316f2451.
      
      * fixed indent
      
      * use InformerSinusoidalPositionalEmbedding
      
      * make fix-style
      
      * fix from #21860
      
      * fix name
      
      * make fix-copies
      
      * use time series utils
      
      * fix dec num_heads
      
      * docstring
      
      * added time series util doc
      
      * _import_structure
      
      * formatting
      
      * changes from review
      
      * make style
      
      * fix docs
      
      * fix doc
      
      * removed NegativeLogLikelihood
      
      ---------
      Co-authored-by: default avatarKashif Rasul <kashif.rasul@gmail.com>
      Co-authored-by: default avatarNielsRogge <48327001+NielsRogge@users.noreply.github.com>
      8abe4930
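      The review bullets above pin down several config semantics: prediction_length must not be None, sampling_factor only matters when attention_type="prob", and config.distil toggles the convolutional distilling layers. A small configuration sketch, with illustrative values only:

      ```python
      from transformers import InformerConfig, InformerForPrediction

      # Illustrative values. Per the review notes above, sampling_factor is only
      # used when attention_type="prob", and prediction_length must be set.
      config = InformerConfig(
          prediction_length=24,
          context_length=48,
          attention_type="prob",
          sampling_factor=5,
          distil=True,
      )
      model = InformerForPrediction(config)
      print(sum(p.numel() for p in model.parameters()))  # sanity check: the model instantiates
      ```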
  33. 03 Mar, 2023 1 commit
  34. 01 Mar, 2023 2 commits
  35. 21 Feb, 2023 1 commit