  1. 03 Jun, 2022 2 commits
  2. 02 Jun, 2022 1 commit
  3. 01 Jun, 2022 2 commits
  4. 31 May, 2022 2 commits
    • Opt in flax and tf (#17388) · 7822a9b7
      Arthur authored
      
      
      * initial commit
      
      * add init file
      
      * update global init
      
      * update index and dummy objects
      
      * style
      
      * update modelling auto
      
      * fix init typo in src/transformers
      
      * fix typo in modeling tf auto, opt was in wrong mapping name
      
      * fixed a slow test: saved_model
      
      * style
      
      * fix positional embedding if no position id is provided
      
      * update tf test
      
      * update test flax requirements
      
      * fixed serialization
      
      * update
      
      * update tf name to allow smooth conversion
      
      * update flax tests
      
      * style
      
      * fix test typo
      
      * fix tf typo test
      
      * add XLA generate support in causal LM
      
      * fixed bug
      
      * cleaned tf tests
      
      * style
      
      * removed from PT for slow tests
      
      * fix typo
      
      * opt test as slow
      
      * trying to fix GPT2 undefined
      
      * correct documentation and add to test doc
      
      * update tf doc
      
      * fix doc
      
      * fake commit
      
      * Apply suggestions from code review
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * update test based on review
      
      * merged main layer for functioning test
      
      * fixup + quality
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * update long comment
      
      * make fix copies
      Co-authored-by: Arthur <arthur@huggingface.co>
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      7822a9b7
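
      A minimal usage sketch for the TF OPT support this PR opts in (the facebook/opt-350m checkpoint and the prompt are illustrative assumptions, not taken from the commit; FlaxOPTForCausalLM is the Flax counterpart):

      # Sketch: text generation with the TF OPT classes enabled by this PR.
      from transformers import AutoTokenizer, TFOPTForCausalLM

      tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")   # assumed checkpoint
      model = TFOPTForCausalLM.from_pretrained("facebook/opt-350m")

      inputs = tokenizer("Hello, my dog is", return_tensors="tf")
      # The PR also mentions XLA generate support for causal LMs; plain generate is shown here.
      output_ids = model.generate(**inputs, max_length=30)
      print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))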
    • Added XLM onnx config (#17030) · 5af38953
      Ritik Nandwal authored
      * Add onnx configuration for xlm
      
      * Add supported features for xlm
      
      * Add xlm to models exportable with onnx
      
      * Add xlm architecture to test file
      
      * Modify docs
      
      * Make code quality fixes
      5af38953
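
      A hedged sketch of exporting XLM with the new ONNX configuration (the class name XLMOnnxConfig, its import path, and the xlm-mlm-en-2048 checkpoint are assumed from the usual transformers.onnx pattern, not stated in the commit):

      # Sketch: export an XLM checkpoint to ONNX using the configuration added here.
      from pathlib import Path

      from transformers import AutoModel, AutoTokenizer
      from transformers.models.xlm import XLMOnnxConfig   # assumed export location of the new config
      from transformers.onnx import export                # requires the onnx extras installed

      ckpt = "xlm-mlm-en-2048"   # assumed Hub checkpoint
      tokenizer = AutoTokenizer.from_pretrained(ckpt)
      model = AutoModel.from_pretrained(ckpt)

      onnx_config = XLMOnnxConfig(model.config, task="default")
      onnx_inputs, onnx_outputs = export(
          tokenizer, model, onnx_config, onnx_config.default_onnx_opset, Path("xlm.onnx")
      )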
  5. 25 May, 2022 1 commit
  6. 24 May, 2022 2 commits
    • [WIP] Adding GPT-NeoX-20B (#16659) · 71e60272
      Jason Phang authored
      
      
      * initial
      
      * first try
      
      * working 20B
      
      * 20B tokenizers
      
      * Docs
      
      * Import fixes for missing classes
      
      * Update docs, fixup
      
      * black formatting
      
      * isort
      
      * flake
      
      * dummy objects
      
      * documentation
      
      * Documentation yml
      
      * more docs
      
      * tweaks for tests
      
      * tokenization auto
      
      * fix neox tests
      
      * test
      
      * test
      
      * einsum
      
      * address PR feedback
      
      * Documentation
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/gpt_neox/__init__.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/gpt_neox/configuration_gpt_neox.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Remove undefined LaTeX syntax
      
      * Update to full URL to avoid confusion about whether it refers to the Hub
      
      * fix auto
      
      * move tests
      
      * documentation fix
      
      * more doc fixes
      
      * test refactor
      
      * fix import
      
      * fix import
      
      * fix import
      
      * fix import
      
      * fix import
      
      * style fixes
      
      * More modeling fixes
      Co-authored-by: Jason Phang <zp489@gr057.hpc.nyu.edu>
      Co-authored-by: Stella Biderman <stellabiderman@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      71e60272
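
      A hedged usage sketch for the new GPT-NeoX classes (the EleutherAI/gpt-neox-20b checkpoint is assumed; the full 20B model needs tens of GB of memory, so this is illustrative only):

      # Sketch: greedy generation with GPT-NeoX-20B.
      from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast

      tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")
      model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")   # tens of GB of weights

      inputs = tokenizer("GPT-NeoX-20B is a", return_tensors="pt")
      output_ids = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(output_ids[0]))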
    • Add LayoutLMv3 (#17060) · 31ee80d5
      NielsRogge authored
      
      
      * Make forward pass work
      
      * More improvements
      
      * Remove unused imports
      
      * Remove timm dependency
      
      * Improve loss calculation of token classifier
      
      * Fix most tests
      
      * Add docs
      
      * Add model integration test
      
      * Make all tests pass
      
      * Add LayoutLMv3FeatureExtractor
      
      * Improve integration test + make fixup
      
      * Add example script
      
      * Fix style
      
      * Add LayoutLMv3Processor
      
      * Fix style
      
      * Add option to add visual labels
      
      * Make more tokenizer tests pass
      
      * Fix more tests
      
      * Make more tests pass
      
      * Fix bug and improve docs
      
      * Fix import of processors
      
      * Improve docstrings
      
      * Fix toctree and improve docs
      
      * Fix auto tokenizer
      
      * Move tests to model folder
      
      * Move tests to model folder
      
      * change default behavior add_prefix_space
      
      * add prefix space for fast
      
      * add_prefix_space set to True for Fast
      
      * no space before `unique_no_split` token
      
      * add test to highlight special treatment of added tokens
      
      * fix `test_batch_encode_dynamic_overflowing` by building a long enough example
      
      * fix `test_full_tokenizer` with add_prefix_token
      
      * Fix tokenizer integration test
      
      * Make the code more readable
      
      * Add tests for LayoutLMv3Processor
      
      * Fix style
      
      * Add model to README and update init
      
      * Apply suggestions from code review
      
      * Replace asserts by value errors
      
      * Add suggestion by @ducviet00
      
      * Add model to doc tests
      
      * Simplify script
      
      * Improve README
      
      * a step ahead to fix
      
      * Update pair_input_test
      
      * Make all tokenizer tests pass - phew
      
      * Make style
      
      * Add LayoutLMv3 to CI job
      
      * Fix auto mapping
      
      * Fix CI job name
      
      * Make all processor tests pass
      
      * Make tests of LayoutLMv2 and LayoutXLM consistent
      
      * Add copied from statements to fast tokenizer
      
      * Add copied from statements to slow tokenizer
      
      * Remove add_visual_labels attribute
      
      * Fix tests
      
      * Add link to notebooks
      
      * Improve docs of LayoutLMv3Processor
      
      * Fix reference to section
      Co-authored-by: SaulLu <lucilesaul.com@gmail.com>
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      31ee80d5
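
      A hedged sketch of the LayoutLMv3Processor / model pairing added here (document.png and num_labels are hypothetical; the processor's default OCR path needs pytesseract installed):

      # Sketch: token classification on a scanned document with LayoutLMv3.
      from PIL import Image

      from transformers import LayoutLMv3ForTokenClassification, LayoutLMv3Processor

      processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base")   # runs OCR by default
      model = LayoutLMv3ForTokenClassification.from_pretrained(
          "microsoft/layoutlmv3-base", num_labels=7   # num_labels is an arbitrary example value
      )

      image = Image.open("document.png").convert("RGB")   # hypothetical scanned document
      encoding = processor(image, return_tensors="pt")
      outputs = model(**encoding)   # outputs.logits holds one prediction per token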
  7. 23 May, 2022 3 commits
  8. 18 May, 2022 1 commit
  9. 17 May, 2022 3 commits
  10. 16 May, 2022 6 commits
  11. 12 May, 2022 2 commits
  12. 11 May, 2022 2 commits
    • [feat] Add FLAVA model (#16654) · a10f6183
      Amanpreet Singh authored
      * [WIP] Add FLAVA model
      
      This PR aims to add the [FLAVA](https://arxiv.org/abs/2112.04482) model to the transformers repo.
      
      The following checklist delineates what needs to be done for this PR
      to be complete:
      
      [x] Flava init
      [x] Flava base models
      [x] Flava layers
      [x] Flava Configs
      [x] Flava encoders
      [x] Flava pretraining models
      [ ] Flava classification/retrieval models (To be added in a separate PR)
      [x] Documentation updates 
      [x] Imports updates 
      [x] Argstring updates
      [x] Flava pretrained checkpoints 
      [x] Flava tests
      [x] Flava processors 
      [x] Sanity check
      [x] Lint
      a10f6183
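
      A hedged usage sketch for the new FLAVA model and processor (the facebook/flava-full checkpoint and the sample image URL are assumptions, not part of the commit):

      # Sketch: joint image/text encoding with FLAVA.
      import requests
      from PIL import Image

      from transformers import FlavaModel, FlavaProcessor

      processor = FlavaProcessor.from_pretrained("facebook/flava-full")
      model = FlavaModel.from_pretrained("facebook/flava-full")

      url = "http://images.cocodataset.org/val2017/000000039769.jpg"   # example COCO image
      image = Image.open(requests.get(url, stream=True).raw)
      inputs = processor(text=["a photo of two cats"], images=[image], return_tensors="pt", padding=True)
      outputs = model(**inputs)   # returns image, text and multimodal embeddings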
    • [WIP] Enable reproducibility for distributed trainings (#16907) · c33f6046
      hasan salim kanmaz authored
      
      
      * add seed worker and set_deterministic_seed_for_cuda function to enforce reproducibility
      
      * change function name to enable determinism, add docstrings, reproducibility support for tf
      
      * change function name to enable_determinism_for_distributed_training
      
      * revert changes in set_seed and call set_seed within enable_full_determinism
      
      * add one position argument for seed_worker function
      
      * add full_determinism flag in training args and call enable_full_determinism when it is true
      
      * add enable_full_determinism to documentation
      
      * apply make fixup after the last commit
      
      * Update src/transformers/training_args.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      c33f6046
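
      A hedged sketch of the reproducibility hooks described above (the output_dir and seed values are placeholders):

      # Sketch: opt in to full determinism for a training run.
      from transformers import TrainingArguments
      from transformers.trainer_utils import enable_full_determinism

      enable_full_determinism(42)   # seeds Python/NumPy/framework RNGs and turns on deterministic kernels

      # The Trainer can trigger the same behaviour via the new flag added in this PR:
      args = TrainingArguments(output_dir="out", full_determinism=True, seed=42)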
  13. 10 May, 2022 2 commits
  14. 09 May, 2022 4 commits
  15. 06 May, 2022 1 commit
  16. 05 May, 2022 1 commit
  17. 04 May, 2022 2 commits
  18. 03 May, 2022 2 commits
    • Make Trainer compatible with sharded checkpoints (#17053) · a8fa2f91
      Sylvain Gugger authored
      * Make Trainer compatible with sharded checkpoints
      
      * Add doc
      a8fa2f91
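
      A hedged sketch of what a sharded checkpoint looks like (model name and shard size are illustrative; the Trainer change lets resume_from_checkpoint load such checkpoints):

      # Sketch: save and reload a model split into several weight shards.
      from transformers import AutoModel

      model = AutoModel.from_pretrained("bert-base-cased")
      # Weights above max_shard_size are split across several files plus an index json.
      model.save_pretrained("sharded-ckpt", max_shard_size="200MB")
      reloaded = AutoModel.from_pretrained("sharded-ckpt")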
    • [FlaxBert] Add ForCausalLM (#16995) · cd9274d0
      Sanchit Gandhi authored
      * [FlaxBert] Add ForCausalLM
      
      * make style
      
      * fix output attentions
      
      * Add RobertaForCausalLM
      
      * remove comment
      
      * fix fx-to-pt model loading
      
      * remove comment
      
      * add modeling tests
      
      * add enc-dec model tests
      
      * add big_bird
      
      * add electra
      
      * make style
      
      * make repo-consistency
      
      * add to docs
      
      * remove roberta test
      
      * quality
      
      * amend cookiecutter
      
      * fix attention_mask bug in flax bert model tester
      
      * tighten pt-fx thresholds to 1e-5
      
      * add 'copied from' statements
      
      * amend 'copied from' statements
      
      * amend 'copied from' statements
      
      * quality
      cd9274d0
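
      A hedged usage sketch for the new Flax causal-LM heads (bert-base-uncased is used only for illustration; loading it into a causal-LM head will warn about weights that were not trained for this task):

      # Sketch: forward pass through FlaxBertForCausalLM.
      from transformers import AutoTokenizer, FlaxBertForCausalLM

      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
      model = FlaxBertForCausalLM.from_pretrained("bert-base-uncased")

      inputs = tokenizer("The capital of France is", return_tensors="np")
      outputs = model(**inputs)
      print(outputs.logits.shape)   # (batch_size, sequence_length, vocab_size)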
  19. 02 May, 2022 1 commit