1. 28 Jun, 2022 1 commit
  2. 27 Jun, 2022 1 commit
    • Add a TF in-graph tokenizer for BERT (#17701) · ee0d001d
      Matt authored
      * Add a TF in-graph tokenizer for BERT
      
      * Add from_pretrained
      
      * Add proper truncation, option handling to match other tokenizers
      
      * Add proper imports and guards
      
      * Add test, fix all the bugs exposed by said test
      
      * Fix truncation of paired texts in graph mode, more test updates
      
      * Small fixes, add a (very careful) test for savedmodel
      
      * Add tensorflow-text dependency, make fixup
      
      * Update documentation
      
      * Update documentation
      
      * make fixup
      
      * Slight changes to tests
      
      * Add some docstring examples
      
      * Update tests
      
      * Update tests and add proper lowercasing/normalization
      
      * make fixup
      
      * Add docstring for padding!
      
      * Mark slow tests
      
      * make fixup
      
      * Fall back to BertTokenizerFast if BertTokenizer is unavailable
      
      * Fall back to BertTokenizerFast if BertTokenizer is unavailable
      
      * make fixup
      
      * Properly handle tensorflow-text dummies
      ee0d001d
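      A minimal usage sketch of what this commit adds, assuming the class is exported as TFBertTokenizer and that tensorflow-text is installed; the checkpoint name and the wrapper function are illustrative, not taken from the PR:

      ```python
      import tensorflow as tf
      from transformers import TFBertTokenizer, TFBertModel  # TFBertTokenizer needs tensorflow-text

      tokenizer = TFBertTokenizer.from_pretrained("bert-base-uncased")
      model = TFBertModel.from_pretrained("bert-base-uncased")

      @tf.function  # tokenization runs as TF ops, so the whole pipeline is traceable / SavedModel-exportable
      def embed(texts):
          tokenized = tokenizer(texts)  # dict of input_ids / attention_mask / token_type_ids
          return model(tokenized).pooler_output

      print(embed(tf.constant(["This sentence is tokenized inside the TF graph."])).shape)
      ```
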
  3. 24 Jun, 2022 2 commits
  4. 23 Jun, 2022 1 commit
    • add doctests for DETR (#17786) · ab223fc1
      Quentin authored
      * add: check labels for detr object detection doctests
      
      * add: check shapes
      
      * add: add detr to documentation_tests.py
      
      * fix: make fixup output
      
      * fix: add a comment
      ab223fc1
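      For context, a doctest-style snippet of the kind this PR adds for DETR object detection; the checkpoint and test image are the ones the library's examples commonly use, and the shape checks stand in for the exact assertions in documentation_tests.py:

      ```python
      import requests
      import torch
      from PIL import Image
      from transformers import DetrFeatureExtractor, DetrForObjectDetection

      url = "http://images.cocodataset.org/val2017/000000039769.jpg"
      image = Image.open(requests.get(url, stream=True).raw)

      feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
      model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

      inputs = feature_extractor(images=image, return_tensors="pt")
      with torch.no_grad():
          outputs = model(**inputs)

      print(outputs.logits.shape)      # (batch, num_queries, num_labels + 1)
      print(outputs.pred_boxes.shape)  # (batch, num_queries, 4)
      ```
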
  5. 18 Jun, 2022 1 commit
  6. 15 Jun, 2022 1 commit
  7. 13 Jun, 2022 2 commits
    • Add `LongT5` model (#16792) · a72f1c9f
      Daniel Stancl authored
      
      
      * Initial commit
      
      * Make some fixes
      
      * Make PT model full forward pass
      
      * Drop TF & Flax implementation, fix copies etc
      
      * Add Flax model and update some corresponding stuff
      
      * Drop some TF things
      
      * Update config and flax local attn
      
      * Add encoder_attention_type to config
      
      * .
      
      * Update docs
      
      * Do some cleansing
      
      * Fix some issues -> make style; add some docs
      
      * Fix position_bias + mask addition + Update tests
      
      * Fix repo consistency
      
      * Fix model consistency by removing flax operation over attn_mask
      
      * [WIP] Add PT TGlobal LongT5
      
      * .
      
      * [WIP] Add flax tglobal model
      
      * [WIP] Update flax model to use the right attention type in the encoder
      
      * Fix flax tglobal model forward pass
      
      * Make use of global_relative_attention_bias
      
      * Add test suites for TGlobal model
      
      * Fix minor bugs, clean code
      
      * Fix pt-flax equivalence though not convinced with correctness
      
      * Fix LocalAttn implementation to match the original impl. + update READMEs
      
      * Few updates
      
      * Update: [Flax] improve large model init and loading #16148
      
      * Add ckpt conversion script according to #16853 + handle torch device placement
      
      * Minor updates to conversion script.
      
      * Typo: AutoModelForSeq2SeqLM -> FlaxAutoModelForSeq2SeqLM
      
      * gpu support + dtype fix
      
      * Apply some suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * * Remove (de)parallelize stuff
      * Edit shape comments
      * Update README.md
      * make fix-copies
      
      * Remove caching logic for local & tglobal attention
      
      * Apply another batch of suggestions from code review
      
      * Add missing checkpoints
      * Format converting scripts
      * Drop (de)parallelize links from longT5 mdx
      
      * Fix converting script + revert config file change
      
      * Revert "Remove caching logic for local & tglobal attention"
      
      This reverts commit 2a619828f6ddc3e65bd9bb1725a12b77fa883a46.
      
      * Stash caching logic in Flax model
      
      * Make side relative bias used always
      
      * Drop caching logic in PT model
      
      * Return side bias as it was
      
      * Drop all remaining model parallel logic
      
      * Remove clamp statements
      
      * Move test files to the proper place
      
      * Update docs with new version of hf-doc-builder
      
      * Fix test imports
      
      * Make some minor improvements
      
      * Add missing checkpoints to docs
      * Make TGlobal model compatible with torch.onnx.export
      * Replace some np.ndarray with jnp.ndarray
      
      * Fix TGlobal for ONNX conversion + update docs
      
      * fix _make_global_fixed_block_ids and masked neg value
      
      * update flax model
      
      * style and quality
      
      * fix imports
      
      * remove load_tf_weights_in_longt5 from init and fix copies
      
      * add slow test for TGlobal model
      
      * typo fix
      
      * Drop obsolete is_parallelizable and one warning
      
      * Update __init__ files to fix repo-consistency
      
      * fix pipeline test
      
      * Fix some device placements
      
      * [wip]: Update tests -- need to generate summaries to update expected_summary
      
      * Fix quality
      
      * Update LongT5 model card
      
      * Update (slow) summarization tests
      
      * make style
      
      * rename checkpoints
      
      * finish
      
      * fix flax tests
      Co-authored-by: phungvanduy <pvduy23@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: patil-suraj <surajp815@gmail.com>
      a72f1c9f
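      A rough sketch of using the new model in its transient-global ("TGlobal") attention variant; the google/long-t5-tglobal-base checkpoint id follows the naming mentioned in the PR and may differ:

      ```python
      from transformers import AutoTokenizer, LongT5ForConditionalGeneration

      checkpoint = "google/long-t5-tglobal-base"  # assumed checkpoint id
      tokenizer = AutoTokenizer.from_pretrained(checkpoint)
      model = LongT5ForConditionalGeneration.from_pretrained(checkpoint)

      # Local / transient-global attention targets long inputs, so several
      # thousand input tokens are practical.
      long_text = "summarize: " + "The quick brown fox jumps over the lazy dog. " * 300
      inputs = tokenizer(long_text, return_tensors="pt")
      summary_ids = model.generate(**inputs, max_length=64)
      print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
      ```
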
    • explicitly set utf8 for Windows (#17664) · 73083581
      Bram Vanroy authored
      73083581
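      The gist of this kind of fix, sketched rather than quoted from the diff: pass encoding="utf-8" explicitly so file I/O does not fall back to Windows' legacy code page (e.g. cp1252). The file names below are only placeholders.

      ```python
      # Without an explicit encoding, open() uses the platform default, which on
      # Windows is often cp1252 and breaks on non-ASCII characters.
      with open("README.md", encoding="utf-8") as f:
          text = f.read()

      with open("copy.md", "w", encoding="utf-8") as f:
          f.write(text)
      ```
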
  8. 09 Jun, 2022 1 commit
  9. 07 Jun, 2022 1 commit
    • M-CTC-T Model (#16402) · 119e3c0f
      Chan Woo Kim authored
      
      
      * added cbs to notebooks, made copy-paste error fix in generation_utils
      
      * initial push for mctc model
      
      * mctc feature extractor done
      
      * added processor, tokenizer and their tests for MCTC. Have added an MCTC modeling test, adjusting model code accordingly.
      
      * added processor, tokenizer and their tests for MCTC. Have added an MCTC modeling test, adjusting model code accordingly.
      
      * passing attention, now struggling to figure out how attention masks make sense here
      
      * works when excluding attention masks. ask later how one would integrate attention masks here
      
      * bizarre configuration error (model prefix comes first in config dict json and messes up the order)
      
      * all passing but bizarre config dict ordering issue when to_dict
      
      * passing all major tests
      
      * feature extraction, processor, tokenizer added & tests passing
      
      * style & consistency & other logistical fixes
      
      * copy paste fix
      
      * model after feature extraction working
      
      * committing final feature extraction results; need to fix normalization
      
      * feature extraction passing tests; probably should add tests on the specific flashlight-copied functions?
      
      * delete print ; format code a bit
      
      * fixing tests
      
      * passing major tests
      
      * fixing styles
      
      * completed tokenization test with real example; not sure if these values are entirely correct.
      
      * last test fixes from local
      
      * reverting accidentally included custom setup configs
      
      * remove load tf weights; fix config error
      
      * testing couldn't import featureextractor
      
      * fix docs
      
      * fix docs
      
      * resolving comments
      
      * style fixes
      
      * style fixes
      
      * Update to MCTCConv1dSubSampler
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * relposemb fixes
      
      * conv1d name issue; expecting config fail with parentheses
      
      * fix config issue
      
      * fix config issue
      
      * fix config issue
      
      * change everything to MCTCT
      
      * fixing naming change errors
      
      * archive list
      
      * copyrights and docs
      
      * copyrights and docs
      
      * copyrights and docs
      
      * merge resolution
      
      * move tests, fix to changed optional dependency structure
      
      * test directories changed
      
      * fixing tests
      
      * how to avoid tf tests?
      
      * how to avoid tf tests?
      
      * tests passing locally
      
      * allow MCTCTProcessor to be imported in any env
      
      * allow MCTCTProcessor to be imported in any env
      
      * fixed second round of feedback, need to fix docs
      
      * doc changes not being applied
      
      * all fixed
      
      * style fix
      
      * feedback fixes
      
      * fix copies and feature extraction style fix
      
      * Update tests/models/visual_bert/test_modeling_visual_bert.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * copy paste huggingface:main visual bert
      
      * added eof newline to visual bert; all tests are passing otherwise
      
      * fix slow tests by adding attention mask
      
      * change model id to speechbrain
      
      * make fix-copies
      
      * fix readme unwanted deletes
      
      * fixing readmes, make fix-copies
      
      * consistent M-CTC-T naming
      
      * Update src/transformers/models/mctct/__init__.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * all fixed but variable naming
      
      * adjust double quotes
      
      * fixed variable names
      
      * copyright and mr quilter
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * correct slow tests
      
      * make fix-copies
      
      * Update src/transformers/models/mctct/configuration_mctct.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/mctct/configuration_mctct.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * m-ctc-t not mctct
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      119e3c0f
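      A hedged inference sketch for the new speech model; the checkpoint id is inferred from the "change model id to speechbrain" entry above, and the dummy dataset is only for illustration:

      ```python
      import torch
      from datasets import load_dataset
      from transformers import MCTCTForCTC, MCTCTProcessor

      checkpoint = "speechbrain/m-ctc-t-large"  # assumed checkpoint id
      processor = MCTCTProcessor.from_pretrained(checkpoint)
      model = MCTCTForCTC.from_pretrained(checkpoint)

      ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
      inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

      with torch.no_grad():
          logits = model(**inputs).logits  # CTC logits over the character vocabulary
      predicted_ids = torch.argmax(logits, dim=-1)
      print(processor.batch_decode(predicted_ids))
      ```
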
  10. 03 Jun, 2022 1 commit
  11. 02 Jun, 2022 3 commits
  12. 31 May, 2022 1 commit
    • Opt in flax and tf (#17388) · 7822a9b7
      Arthur authored
      
      
      * initial commit
      
      * add init file
      
      * update global init
      
      * update index and dummy objects
      
      * style
      
      * update modelling auto
      
      * fix init typo in src/transformers
      
      * fix typo in modeling tf auto, opt was in wrong mapping name
      
      * fixed a slow test : saved_model
      
      * style
      
      * fix positional embedding if no position id is provided
      
      * update tf test
      
      * update test flax requirements
      
      * fixed serialization
      
      * update
      
      * update tf name to allow smooth conversion
      
      * update flax tests
      
      * style
      
      * fix test typo
      
      * fix tf typo test
      
      * add xla for generate support in causal LM
      
      * fixed bug
      
      * cleaned tf tests
      
      * style
      
      * removed from PT for slow tests
      
      * fix typo
      
      * opt test as slow
      
      * trying to fix GPT2 undefined
      
      * correct documentation and add to test doc
      
      * update tf doc
      
      * fix doc
      
      * fake commit
      
      * Apply suggestions from code review
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * update test based on review
      
      * merged main layer for functioning test
      
      * fixup + quality
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * update long comment
      
      * make fix copies
      Co-authored-by: Arthur <arthur@huggingface.co>
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      7822a9b7
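      A short sketch of the TensorFlow port this enables (the Flax classes follow the same pattern); the checkpoint name and prompt are illustrative:

      ```python
      from transformers import AutoTokenizer, TFOPTForCausalLM

      tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
      model = TFOPTForCausalLM.from_pretrained("facebook/opt-350m")

      inputs = tokenizer("The capital of France is", return_tensors="tf")
      # XLA-compatible generation for causal LMs was part of this work.
      generated = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(generated[0], skip_special_tokens=True))
      ```
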
  13. 25 May, 2022 2 commits
  14. 24 May, 2022 1 commit
    • Add LayoutLMv3 (#17060) · 31ee80d5
      NielsRogge authored
      
      
      * Make forward pass work
      
      * More improvements
      
      * Remove unused imports
      
      * Remove timm dependency
      
      * Improve loss calculation of token classifier
      
      * Fix most tests
      
      * Add docs
      
      * Add model integration test
      
      * Make all tests pass
      
      * Add LayoutLMv3FeatureExtractor
      
      * Improve integration test + make fixup
      
      * Add example script
      
      * Fix style
      
      * Add LayoutLMv3Processor
      
      * Fix style
      
      * Add option to add visual labels
      
      * Make more tokenizer tests pass
      
      * Fix more tests
      
      * Make more tests pass
      
      * Fix bug and improve docs
      
      * Fix import of processors
      
      * Improve docstrings
      
      * Fix toctree and improve docs
      
      * Fix auto tokenizer
      
      * Move tests to model folder
      
      * Move tests to model folder
      
      * change default behavior add_prefix_space
      
      * add prefix space for fast
      
      * add_prefix_space set to True for Fast
      
      * no space before `unique_no_split` token
      
      * add test to highlight special treatment of added tokens
      
      * fix `test_batch_encode_dynamic_overflowing` by building a long enough example
      
      * fix `test_full_tokenizer` with add_prefix_token
      
      * Fix tokenizer integration test
      
      * Make the code more readable
      
      * Add tests for LayoutLMv3Processor
      
      * Fix style
      
      * Add model to README and update init
      
      * Apply suggestions from code review
      
      * Replace asserts by value errors
      
      * Add suggestion by @ducviet00
      
      * Add model to doc tests
      
      * Simplify script
      
      * Improve README
      
      * a step ahead to fix
      
      * Update pair_input_test
      
      * Make all tokenizer tests pass - phew
      
      * Make style
      
      * Add LayoutLMv3 to CI job
      
      * Fix auto mapping
      
      * Fix CI job name
      
      * Make all processor tests pass
      
      * Make tests of LayoutLMv2 and LayoutXLM consistent
      
      * Add copied from statements to fast tokenizer
      
      * Add copied from statements to slow tokenizer
      
      * Remove add_visual_labels attribute
      
      * Fix tests
      
      * Add link to notebooks
      
      * Improve docs of LayoutLMv3Processor
      
      * Fix reference to section
      Co-authored-by: SaulLu <lucilesaul.com@gmail.com>
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      31ee80d5
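      A hedged sketch wiring the new processor to a token-classification head; the checkpoint name and the dummy words/boxes are illustrative, and the boxes are assumed to already be normalized to the 0-1000 range:

      ```python
      import torch
      from PIL import Image
      from transformers import LayoutLMv3ForTokenClassification, LayoutLMv3Processor

      # apply_ocr=False because words and boxes are supplied directly.
      processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
      model = LayoutLMv3ForTokenClassification.from_pretrained("microsoft/layoutlmv3-base", num_labels=7)

      image = Image.new("RGB", (224, 224), color="white")  # stand-in for a document scan
      words = ["Invoice", "total:", "42.00"]
      boxes = [[50, 50, 200, 80], [50, 100, 200, 130], [210, 100, 330, 130]]

      encoding = processor(image, words, boxes=boxes, return_tensors="pt")
      with torch.no_grad():
          logits = model(**encoding).logits
      print(logits.shape)  # (1, sequence_length, num_labels)
      ```
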
  15. 23 May, 2022 1 commit
    • Correct & Improve Doctests for LayoutLMv2 (#17168) · 7b8cb269
      ghlai9665 authored
      
      
      * add inference example to LayoutLMv2ForQuestionAnswering, passing doctest
      
      * add loss example to LayoutLMv2ForQuestionAnswering, passing doctest
      
      * Add correct doctest for LayoutLMv2ForTokenClassification, passing doctest
      
      * add correct doctest for LayoutLMv2ForSequenceClassification, passing test
      
      * add correct doctest for LayoutLMv2Model, passing test
      
      * make fixup
      
      * fix to address review comments
      
      * make style
      
      * fix doctest line break issue, add to documentation_tests.txt, address review comments
      
      * move comment about layoutlmv2 dependencies to the doc page
      
      * format doc page as suggested
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * delete extraneous backtick
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      7b8cb269
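      The doctests above follow roughly this inference pattern for LayoutLMv2ForQuestionAnswering (sketched from memory, not copied from the PR); note that LayoutLMv2 additionally requires detectron2 and pytesseract, and the document image path is a placeholder:

      ```python
      import torch
      from PIL import Image
      from transformers import LayoutLMv2ForQuestionAnswering, LayoutLMv2Processor

      processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
      model = LayoutLMv2ForQuestionAnswering.from_pretrained("microsoft/layoutlmv2-base-uncased")

      image = Image.open("document.png").convert("RGB")  # any scanned document image
      encoding = processor(image, "What is the invoice total?", return_tensors="pt")

      with torch.no_grad():
          outputs = model(**encoding)

      start = outputs.start_logits.argmax(-1).item()
      end = outputs.end_logits.argmax(-1).item()
      print(processor.tokenizer.decode(encoding["input_ids"][0, start : end + 1]))
      ```
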
  16. 18 May, 2022 4 commits
  17. 17 May, 2022 5 commits
  18. 16 May, 2022 3 commits
  19. 13 May, 2022 4 commits
  20. 12 May, 2022 2 commits
  21. 11 May, 2022 1 commit
    • [feat] Add FLAVA model (#16654) · a10f6183
      Amanpreet Singh authored
      * [WIP] Add FLAVA model
      
      This PR aims to add the [FLAVA](https://arxiv.org/abs/2112.04482) model to the transformers repo.
      
      The following checklist delineates what needs to be done for this PR
      to be complete:
      
      [x] Flava init
      [x] Flava base models
      [x] Flava layers
      [x] Flava Configs
      [x] Flava encoders
      [x] Flava pretraining models
      [ ] Flava classification/retrieval models (To be added in a separate PR)
      [x] Documentation updates 
      [x] Imports updates 
      [x] Argstring updates
      [x] Flava pretrained checkpoints 
      [x] Flava tests
      [x] Flava processors 
      [x] Sanity check
      [x] Lint
      a10f6183
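      A hedged sketch of the multimodal model this adds; facebook/flava-full is the commonly documented checkpoint, and the output attribute names are assumptions based on the model's unimodal/multimodal design:

      ```python
      import torch
      from PIL import Image
      from transformers import FlavaModel, FlavaProcessor

      processor = FlavaProcessor.from_pretrained("facebook/flava-full")
      model = FlavaModel.from_pretrained("facebook/flava-full")

      image = Image.new("RGB", (224, 224), color="white")
      inputs = processor(text=["a photo of a cat"], images=[image], return_tensors="pt", padding=True)

      with torch.no_grad():
          outputs = model(**inputs)

      # FLAVA returns separate image and text embeddings (plus a fused multimodal one).
      print(outputs.image_embeddings.shape)
      print(outputs.text_embeddings.shape)
      ```
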
  22. 10 May, 2022 1 commit
    • Add MLFLOW_FLATTEN_PARAMS support in MLflowCallback (#17148) · e99f0efe
      Nicolas Brousse authored
      * add support for MLFLOW_FLATTEN_PARAMS
      
      * ensure key is str
      
      * fix style and update warning msg
      
      * Empty commit to trigger CI
      
      * fix bug in check_inits.py
      
      * add unittest for flatten_dict utils
      
      * fix 'NoneType' object is not callable on __del__
      
      * add generic flatten_dict unittest to SPECIAL_MODULE_TO_TEST_MAP
      
      * fix style
      e99f0efe
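      A hedged sketch of how the new option is typically used; the dot-joined key format reflects the PR description, and the exact behavior may vary by version:

      ```python
      import os

      # Must be set before training starts; MLflowCallback reads it during setup.
      os.environ["MLFLOW_FLATTEN_PARAMS"] = "TRUE"

      from transformers import Trainer, TrainingArguments

      # report_to=["mlflow"] makes Trainer attach MLflowCallback automatically.
      args = TrainingArguments(output_dir="out", report_to=["mlflow"], num_train_epochs=1)

      # With flattening on, nested params such as {"optim": {"lr": 5e-5}} are logged
      # as {"optim.lr": 5e-5}, which helps respect MLflow's key/value length limits.
      # trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
      # trainer.train()
      ```
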