1. 28 Jun, 2022 1 commit
  2. 13 Jun, 2022 2 commits
    • Add `LongT5` model (#16792) · a72f1c9f
      Daniel Stancl authored
      
      
      * Initial commit
      
      * Make some fixes
      
      * Make PT model full forward pass
      
      * Drop TF & Flax implementation, fix copies etc
      
      * Add Flax model and update some corresponding stuff
      
      * Drop some TF things
      
      * Update config and flax local attn
      
      * Add encoder_attention_type to config
      
      * .
      
      * Update docs
      
      * Do some cleansing
      
      * Fix some issues -> make style; add some docs
      
      * Fix position_bias + mask addition + Update tests
      
      * Fix repo consistency
      
      * Fix model consistency by removing flax operation over attn_mask
      
      * [WIP] Add PT TGlobal LongT5
      
      * .
      
      * [WIP] Add flax tglobal model
      
      * [WIP] Update flax model to use the right attention type in the encoder
      
      * Fix flax tglobal model forward pass
      
      * Make use of global_relative_attention_bias
      
      * Add test suites for TGlobal model
      
      * Fix minor bugs, clean code
      
      * Fix pt-flax equivalence though not convinced with correctness
      
      * Fix LocalAttn implementation to match the original impl. + update READMEs
      
      * Few updates
      
      * Update: [Flax] improve large model init and loading #16148
      
      * Add ckpt conversion script according to #16853 + handle torch device placement
      
      * Minor updates to conversion script.
      
      * Typo: AutoModelForSeq2SeqLM -> FlaxAutoModelForSeq2SeqLM
      
      * gpu support + dtype fix
      
      * Apply some suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * * Remove (de)parallelize stuff
      * Edit shape comments
      * Update README.md
      * make fix-copies
      
      * Remove caching logic for local & tglobal attention
      
      * Apply another batch of suggestions from code review
      
      * Add missing checkpoints
      * Format converting scripts
      * Drop (de)parallelize links from longT5 mdx
      
      * Fix converting script + revert config file change
      
      * Revert "Remove caching logic for local & tglobal attention"
      
      This reverts commit 2a619828f6ddc3e65bd9bb1725a12b77fa883a46.
      
      * Stash caching logic in Flax model
      
      * Make side relative bias used always
      
      * Drop caching logic in PT model
      
      * Return side bias as it was
      
      * Drop all remaining model parallel logic
      
      * Remove clamp statements
      
      * Move test files to the proper place
      
      * Update docs with new version of hf-doc-builder
      
      * Fix test imports
      
      * Make some minor improvements
      
      * Add missing checkpoints to docs
      * Make TGlobal model compatible with torch.onnx.export
      * Replace some np.ndarray with jnp.ndarray
      
      * Fix TGlobal for ONNX conversion + update docs
      
      * fix _make_global_fixed_block_ids and masked neg  value
      
      * update flax model
      
      * style and quality
      
      * fix imports
      
      * remove load_tf_weights_in_longt5 from init and fix copies
      
      * add slow test for TGlobal model
      
      * typo fix
      
      * Drop obsolete is_parallelizable and one warning
      
      * Update __init__ files to fix repo-consistency
      
      * fix pipeline test
      
      * Fix some device placements
      
      * [wip]: Update tests -- need to generate summaries to update expected_summary
      
      * Fix quality
      
      * Update LongT5 model card
      
      * Update (slow) summarization tests
      
      * make style
      
      * rename checkpoints
      
      * finish
      
      * fix flax tests
      Co-authored-by: phungvanduy <pvduy23@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: patil-suraj <surajp815@gmail.com>
      a72f1c9f
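The PR above introduces LongT5 (local and transient-global attention variants) in PyTorch and Flax. Below is a minimal smoke-test sketch of the PyTorch TGlobal variant; the `google/long-t5-tglobal-base` checkpoint id is an assumption based on the checkpoints mentioned in the commits.

```python
# Minimal LongT5 sketch (assumes the google/long-t5-tglobal-base checkpoint exists on the Hub).
import torch
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-base")

# LongT5 targets long inputs; repeat a sentence to simulate a long document.
text = "summarize: " + "Studies have shown that owning a dog is good for you. " * 100
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # The non-finetuned checkpoint will not give a useful summary;
    # this only verifies that the generate path runs end to end.
    summary_ids = model.generate(inputs.input_ids, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```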
    • explicitly set utf8 for Windows (#17664) · 73083581
      Bram Vanroy authored
      73083581
  3. 07 Jun, 2022 1 commit
    • M-CTC-T Model (#16402) · 119e3c0f
      Chan Woo Kim authored
      
      
      * added cbs to notebooks, made copy-paste error fix in generation_utils
      
      * initial push for mctc model
      
      * mctc feature extractor done
      
      * added processor, tokenizer and their tests for MCTC. Have added an MCTC modeling test, adjusting model code accordingly.
      
      * added processor, tokenizer and their tests for MCTC. Have added an MCTC modeling test, adjusting model code accordingly.
      
      * passing attention, now struggling to figure out how attention masks make sense here
      
      * works when excluding attention masks. ask later how one would integrate attention masks here
      
      * bizarre configuration error (model prefix comes first in config dict json and messes up the order)
      
      * all passing but bizarre config dict ordering issue when to_dict
      
      * passing all major tests
      
      * feature extraction, processor, tokenizer added & tests passing
      
      * style & consistency & other logistical fixes
      
      * copy paste fix
      
      * model after feature extraction working
      
      * committing final feature extraction results; need to fix normalization
      
      * feature extraction passing tests; probably should add tests on the specific flashlight-copied functions?
      
      * delete print ; format code a bit
      
      * fixing tests
      
      * passing major tests
      
      * fixing styles
      
      * completed tokenization test with real example; not sure if these values are entirely correct.
      
      * last test fixes from local
      
      * reverting accidentally included custom setup configs
      
      * remove load tf weights; fix config error
      
      * testing couldn't import feature extractor
      
      * fix docs
      
      * fix docs
      
      * resolving comments
      
      * style fixes
      
      * style fixes
      
      * Update to MCTCConv1dSubSampler
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * relposemb fixes
      
      * conv1d name issue; expecting config fail with parentheses
      
      * fix config issue
      
      * fix config issue
      
      * fix config issue
      
      * change everything to MCTCT
      
      * fixing naming change errors
      
      * archive list
      
      * copyrights and docs
      
      * copyrights and docs
      
      * copyrights and docs
      
      * merge resolution
      
      * move tests, fix to changed optionaldependency structure
      
      * test directories changed
      
      * fixing tests
      
      * how to avoid tf tests?
      
      * how to avoid tf tests?
      
      * tests passing locally
      
      * allow mctctprocessor imported any env
      
      * allow mctctprocessor imported any env
      
      * fixed second round of feedback, need to fix docs
      
      * doc changes not being applied
      
      * all fixed
      
      * style fix
      
      * feedback fixes
      
      * fix copies and feature extraction style fix
      
      * Update tests/models/visual_bert/test_modeling_visual_bert.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * copy paste huggingface:main visual bert
      
      * added eof newline to visual bert; all tests are passing otherwise
      
      * fix slow tests by adding attention mask
      
      * change model id to speechbrain
      
      * make fix-copies
      
      * fix readme unwanted deletes
      
      * fixing readmes, make fix-copies
      
      * consistent M-CTC-T naming
      
      * Update src/transformers/models/mctct/__init__.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * all fixed but variable naming
      
      * adjust double quotes
      
      * fixed variable names
      
      * copyright and mr quilter
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * correct slow tests
      
      * make fix-copies
      
      * Update src/transformers/models/mctct/configuration_mctct.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/mctct/configuration_mctct.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * m-ctc-t not mctct
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      119e3c0f
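A rough sketch of how the new M-CTC-T pieces fit together (processor, model, CTC decode). The `speechbrain/m-ctc-t-large` hub id is inferred from the "change model id to speechbrain" step above and should be treated as an assumption; the random waveform only stands in for real 16 kHz audio.

```python
import numpy as np
import torch
from transformers import MCTCTProcessor, MCTCTForCTC

# Hub id assumed from the "change model id to speechbrain" commit above.
processor = MCTCTProcessor.from_pretrained("speechbrain/m-ctc-t-large")
model = MCTCTForCTC.from_pretrained("speechbrain/m-ctc-t-large")

# Two seconds of dummy 16 kHz audio; use a real recording for meaningful output.
waveform = np.random.randn(2 * 16000).astype(np.float32)
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```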
  4. 12 May, 2022 2 commits
  5. 11 May, 2022 1 commit
    • [feat] Add FLAVA model (#16654) · a10f6183
      Amanpreet Singh authored
      * [WIP] Add FLAVA model
      
      This PR aims to add the [FLAVA](https://arxiv.org/abs/2112.04482) model to the transformers repo.
      
      The following checklist delineates the things to be done for this PR
      to be complete:
      
      [x] Flava init
      [x] Flava base models
      [x] Flava layers
      [x] Flava Configs
      [x] Flava encoders
      [x] Flava pretraining models
      [ ] Flava classification/retrieval models (To be added in a separate PR)
      [x] Documentation updates 
      [x] Imports updates 
      [x] Argstring updates
      [x] Flava pretrained checkpoints 
      [x] Flava tests
      [x] Flava processors 
      [x] Sanity check
      [x] Lint
      a10f6183
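A short sketch of running the new multimodal FLAVA base model on an image/text pair; `facebook/flava-full` is assumed to be the pretrained checkpoint id.

```python
import requests
import torch
from PIL import Image
from transformers import FlavaProcessor, FlavaModel

# "facebook/flava-full" is an assumed checkpoint id for the pretrained model.
processor = FlavaProcessor.from_pretrained("facebook/flava-full")
model = FlavaModel.from_pretrained("facebook/flava-full")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = processor(text=["a photo of two cats"], images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)
# Unimodal (and multimodal) embeddings are exposed on the output.
print(outputs.image_embeddings.shape, outputs.text_embeddings.shape)
```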
  6. 03 May, 2022 2 commits
    • Move test model folders (#17034) · 19420fd9
      Yih-Dar authored
      
      
      * move test model folders (TODO: fix imports and others)
      
      * fix (potentially partially) imports (in model test modules)
      
      * fix (potentially partially) imports (in tokenization test modules)
      
      * fix (potentially partially) imports (in feature extraction test modules)
      
      * fix import utils.test_modeling_tf_core
      
      * fix path ../fixtures/
      
      * fix imports about generation.test_generation_flax_utils
      
      * fix more imports
      
      * fix fixture path
      
      * fix get_test_dir
      
      * update module_to_test_file
      
      * fix get_tests_dir from wrong transformers.utils
      
      * update config.yml (CircleCI)
      
      * fix style
      
      * remove missing imports
      
      * update new model script
      
      * update check_repo
      
      * update SPECIAL_MODULE_TO_TEST_MAP
      
      * fix style
      
      * add __init__
      
      * update self-scheduled
      
      * fix add_new_model scripts
      
      * check one way to get location back
      
      * python setup.py build install
      
      * fix import in test auto
      
      * update self-scheduled.yml
      
      * update slack notification script
      
      * Add comments about artifact names
      
      * fix for yolos
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      19420fd9
    • [FlaxBert] Add ForCausalLM (#16995) · cd9274d0
      Sanchit Gandhi authored
      * [FlaxBert] Add ForCausalLM
      
      * make style
      
      * fix output attentions
      
      * Add RobertaForCausalLM
      
      * remove comment
      
      * fix fx-to-pt model loading
      
      * remove comment
      
      * add modeling tests
      
      * add enc-dec model tests
      
      * add big_bird
      
      * add electra
      
      * make style
      
      * make repo-consistency
      
      * add to docs
      
      * remove roberta test
      
      * quality
      
      * amend cookiecutter
      
      * fix attention_mask bug in flax bert model tester
      
      * tighten pt-fx thresholds to 1e-5
      
      * add 'copied from' statements
      
      * amend 'copied from' statements
      
      * amend 'copied from' statements
      
      * quality
      cd9274d0
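A minimal sketch of the new Flax causal-LM head on BERT. `bert-base-uncased` is used only as a convenient checkpoint here; any head weights missing from it are randomly initialised, so the logits serve as a shape check only.

```python
from transformers import AutoTokenizer, FlaxBertForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# is_decoder=True enables the causal attention mask for left-to-right generation.
model = FlaxBertForCausalLM.from_pretrained("bert-base-uncased", is_decoder=True)

inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
logits = model(**inputs).logits  # (batch, sequence, vocab_size)
print(logits.shape)
```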
  7. 28 Apr, 2022 1 commit
  8. 27 Apr, 2022 1 commit
  9. 18 Apr, 2022 1 commit
  10. 04 Apr, 2022 1 commit
  11. 28 Mar, 2022 1 commit
    • Add DPT (#15991) · 979b039c
      NielsRogge authored
      
      
      * First draft
      
      * More improvements
      
      * Add fusion blocks
      
      * Make conversion script work for dpt_large
      
      * Make conversion script work
      
      * Improve implementation
      
      * Improve conversion script
      
      * Add DPTForSemanticSegmentation
      
      * Make conversion work for semantic segmentation
      
      * Add tests
      
      * Remove print statements
      
      * First draft
      
      * Redesign neck
      
      * Improve tests
      
      * Improve implementation some more
      
      * Make neck output list of tensors
      
      * Improve neck and feature extractor
      
      * Fix integration tests
      
      * Make more tests pass
      
      * Make all tests pass
      
      * Add missing config archive map
      
      * Add in_index attribute to make heads accept list of tensors
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Apply some more suggestions
      
      * Add copied from statements
      
      * Remove assert
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * Remove DPTInterpolate in favor of nn.Upsample
      
      * Add comments
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * Add proposed design
      
      * Update design
      
      * Add DPTReassembleLayer
      
      * Add DPTFeatureFusionStage
      
      * Apply more suggestions from code review
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * Fix rebase
      
      * Update in_index and out_indices
      
      * Fix conversion script
      
      * Fix code quality
      
      * Add model to toctree and use DepthEstimatorOutput
      
      * Fix rebase
      
      * Fix code examples
      
      * Improve code
      
      * Fix copied from statements
      
      * Apply suggestions from code review
      
      * Remove compute_loss method
      
      * Apply suggestions from code review
      
      * Fix documentation tests file
      
      * Remove test.py file
      
      * Improve doc example
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Niels Rogge <nielsrogge@nielss-mbp.home>
      979b039c
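A short monocular depth-estimation sketch with the new DPT classes; `Intel/dpt-large` is assumed to be one of the converted checkpoints.

```python
import requests
import torch
from PIL import Image
from transformers import DPTFeatureExtractor, DPTForDepthEstimation

# Checkpoint id assumed for illustration.
feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
# predicted_depth comes back as (batch, height, width), via the DepthEstimatorOutput used above.
print(outputs.predicted_depth.shape)
```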
  12. 23 Mar, 2022 2 commits
    • Decision transformer gym (#15845) · aff9bc40
      Edward Beeching authored
      
      
      * Created the Decision Transformer Model
      
      * updating tests, copy to other machine
      
      * Added last hidden size to Decision Transformer modelling outputs
      
      * Removed copy of original DT file
      
      * made a temporary change to gpt2 to have it conform with the Decision Transformer version
      
      * Updated tests
      
      * Ignoring a file used to test the DT model
      
      * added comments to config file
      
      * added comments and argument descriptions to decision transformer file
      
      * Updated doc
      
      * Ran "make style"
      
      * Remove old model imports
      
      * Removed unused imports, cleaned up init file
      
      * Update docs/source/model_doc/decision_transformer.mdx
      
      added my username
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Reverted changes made to gpt2
      
      * Removed datasets submodule
      
      * Update the modeling outputs to include gpt2 attentions, hidden states and last hidden states
      
      * Added support for return of hidden states, attentions and return dict of gpt2 model.
      
      * Updated tests to include many of the ModelTesterMixin tests. 
      
      The following tests are skipped: test_generate_without_input_ids, test_pruning, test_resize_embeddings, test_head_masking, test_attention_outputs, test_hidden_states_output, test_inputs_embeds, test_model_common_attributes
      
      * Added missing line to the end of gpt2 file
      
      * Added an integration test for the Decision Transformer
      
      Test performs and autoregressive evaluation for two time steps
      
      * Set done and info to _ to fix failing test
      
      * Updated integration test to be deterministic and check expected outputs
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Removed unnecessary config options
      
      * Cleaned up commented code and old comments.
      
      * Cleaned up commented code.
      
      * Changed DecisionTransformer to Decision Transformer
      
      * Added Decision Transformer to the main README file
      
      * Added copy of GPT2 called DecisionTransformerGPT2Model
      
      * isorted imports
      
      * isorted imports
      
      * Added model to non-English README files
      
      * Ran make fix-copies and corrected some cases.
      
      * Updated index file to include Decision Transformer
      
      * Added gpt2 model as copy inside the Decision Transformer model file
      
      * Added the unit test file to the list of TEST_FILES_WITH_NO_COMMON_TESTS
      
      * Deleted redundant checkpoint files (I don't know how these got committed)
      
      * Removed testing files. (These should have never been committed)
      
      * Removed accidentally committed files
      
      * Moved the Decision Transformer test to its own directory
      
      * Add type hints for Pegasus (#16324)
      
      * Funnel type hints (#16323)
      
      * add pt funnel type hints
      
      * add tf funnel type hints
      
      * Add type hints for ProphetNet PyTorch (#16272)
      
      * [GLPN] Improve docs (#16331)
      
      * Add link to notebook
      
      * Add link
      
      * Fix bug
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      
      * Added type hints for Pytorch Marian calls (#16200)
      
      * Added type hinting for forward functions in pytorch marian
      
      * typo correction
      
      * Removed type hints on functions from BART per Suraj Patil request
      
      * fix import pb
      
      * fix typo
      
      * corrected tuple call
      
      * ran black
      
      * after fix-copies
      Some optional tags on primitives were removed, past_key_values in MarianForCausalLM changed from Tuple of Tuple to List
      
      * Fixing copies to roformer and pegasus
      Co-authored-by: Clementine Fourrier <cfourrie@inria.fr>
      Co-authored-by: matt <rocketknight1@gmail.com>
      
      * Moved DecisionTransformOutput to modeling_decision_transformer
      
      * Moved the example usage to research project and cleaned comments
      
      * Made tests ignore the copy of gpt2 in Decision Transformer
      
      * Added module output to modelling decision transformer
      
      * removed copied gpt2 model from list of transformers models
      
      * Updated tests and created __init__ file for new test location
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/decision_transformer/configuration_decision_transformer.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Removed unneeded summary type from config file
      
      * Fixed copies
      
      * Updated pretrained config map to refer to hopper-medium checkpoint
      
      * done (#16340)
      
      * Added Decision transformer to model docs
      
      * Update src/transformers/models/decision_transformer/modeling_decision_transformer.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/decision_transformer/modeling_decision_transformer.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/decision_transformer/configuration_decision_transformer.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Add type annotations for Rembert/Splinter and copies (#16338)
      
      * undo black autoformat
      
      * minor fix to rembert forward with default
      
      * make fix-copies, make quality
      
      * Adding types to template model
      
      * Removing List from the template types
      
      * Remove `Optional` from a couple of types that don't accept `None`
      Co-authored-by: matt <rocketknight1@gmail.com>
      
      * [Bug template] Shift responsibilities for long-range (#16344)
      
      * Fix code repetition in serialization guide (#16346)
      
      * Adopt framework-specific blocks for content (#16342)
      
      *  refactor code samples with framework-specific blocks
      
      *  update training.mdx
      
      * 🖍 apply feedback
      
      * Updates the default branch from master to main (#16326)
      
      * Updates the default branch from master to main
      
      * Links from `master` to `main`
      
      * Typo
      
      * Update examples/flax/README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Updated model with custom docstring example
      
      * Created the Decision Transformer Model
      
      * updating tests, copy to other machine
      
      * Added last hidden size to Decision Transformer modelling outputs
      
      * Removed copy of original DT file
      
      * made a temporary change to gpt2 to have it conform with the Decision Transformer version
      
      * Updated tests
      
      * Ignoring a file used to test the DT model
      
      * added comments to config file
      
      * added comments and argument descriptions to decision transformer file
      
      * Updated doc
      
      * Ran "make style"
      
      * Remove old model imports
      
      * Removed unused imports, cleaned up init file
      
      * Update docs/source/model_doc/decision_transformer.mdx
      
      added my username
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Reverted changes made to gpt2
      
      * Removed datasets submodule
      
      * Update the modeling outputs to include gpt2 attentions, hidden states and last hidden states
      
      * Added support for return of hidden states, attentions and return dict of gpt2 model.
      
      * Updated tests to include many of the ModelTesterMixin tests. 
      
      The following tests are skipped: test_generate_without_input_ids, test_pruning, test_resize_embeddings, test_head_masking, test_attention_outputs, test_hidden_states_output, test_inputs_embeds, test_model_common_attributes
      
      * Added missing line to the end of gpt2 file
      
      * Added an integration test for the Decision Transformer
      
      Test performs and autoregressive evaluation for two time steps
      
      * Set done and info to _ to fix failing test
      
      * Updated integration test to be deterministic and check expected outputs
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Removed unnecessary config options
      
      * Cleaned up commented code and old comments.
      
      * Cleaned up commented code.
      
      * Changed DecisionTransformer to Decision Transformer
      
      * Added Decision Transformer to the main README file
      
      * Added copy of GPT2 called DecisionTransformerGPT2Model
      
      * isorted imports
      
      * isorted imports
      
      * Added model to non-English README files
      
      * Ran make fix-copies and corrected some cases.
      
      * Updated index file to include Decision Transformer
      
      * Added gpt2 model as copy inside the Decision Transformer model file
      
      * Added the unit test file to the list of TEST_FILES_WITH_NO_COMMON_TESTS
      
      * Deleted redundant checkpoint files (I don't know how these got committed)
      
      * Removed testing files. (These should have never been committed)
      
      * Removed accidentally committed files
      
      * Moved the Decision Transformer test to its own directory
      
      * Moved DecisionTransformOutput to modeling_decision_transformer
      
      * Moved the example usage to research project and cleaned comments
      
      * Made tests ignore the copy of gpt2 in Decision Transformer
      
      * Added module output to modelling decision transformer
      
      * removed copied gpt2 model from list of transformers models
      
      * Updated tests and created __init__ file for new test location
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/decision_transformer/configuration_decision_transformer.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Removed unneeded summary type from config file
      
      * Fixed copies
      
      * Updated pretrained config map to refer to hopper-medium checkpoint
      
      * Added Decision transformer to model docs
      
      * Update src/transformers/models/decision_transformer/modeling_decision_transformer.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/decision_transformer/modeling_decision_transformer.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/decision_transformer/configuration_decision_transformer.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Updated model with custom docstring example
      
      * Updated copies, config auto, and readme files.
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Dan Tegzes <48134725+Tegzes@users.noreply.github.com>
      Co-authored-by: Adam Montgomerie <adam@avanssion.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      Co-authored-by: Clémentine Fourrier <22726840+clefourrier@users.noreply.github.com>
      Co-authored-by: Clementine Fourrier <cfourrie@inria.fr>
      Co-authored-by: matt <rocketknight1@gmail.com>
      Co-authored-by: Francesco Saverio Zuppichini <francesco.zuppichini@gmail.com>
      Co-authored-by: Jacob Dineen <54680234+jacobdineen@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Omar Sanseviero <osanseviero@gmail.com>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
      aff9bc40
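A minimal forward-pass sketch for the new model, using dummy trajectories shaped from the config. The `edbeeching/decision-transformer-gym-hopper-medium` checkpoint id is inferred from the "hopper-medium" mention above and should be treated as an assumption.

```python
import torch
from transformers import DecisionTransformerModel

# Checkpoint id assumed from the "hopper-medium" reference in the commits above.
model = DecisionTransformerModel.from_pretrained("edbeeching/decision-transformer-gym-hopper-medium")

batch, seq = 1, 20
states = torch.randn(batch, seq, model.config.state_dim)
actions = torch.randn(batch, seq, model.config.act_dim)
returns_to_go = torch.randn(batch, seq, 1)
timesteps = torch.arange(seq).unsqueeze(0)
attention_mask = torch.ones(batch, seq, dtype=torch.long)

with torch.no_grad():
    outputs = model(
        states=states,
        actions=actions,
        returns_to_go=returns_to_go,
        timesteps=timesteps,
        attention_mask=attention_mask,
    )
# The model predicts next states, actions and returns for each position.
print(outputs.action_preds.shape)
```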
    • Reorganize file utils (#16264) · 4975002d
      Sylvain Gugger authored
      * Split file_utils in several submodules
      
      * Fixes
      
      * Add back more objects
      
      * More fixes
      
      * Who exactly decided to import that from there?
      
      * Second suggestion to code with code review
      
      * Revert wrong move
      
      * Fix imports
      
      * Adapt all imports
      
      * Adapt all imports everywhere
      
      * Revert this import, will fix in a separate commit
      4975002d
  13. 22 Mar, 2022 1 commit
    • Add GLPN (#16199) · 0c55d47c
      NielsRogge authored
      
      
      * First draft
      
      * Fix logits calculation
      
      * Improve tests
      
      * Add copied from statements
      
      * Fix base_model_prefix
      
      * Improve implementation, upload new models
      
      * Update design
      
      * Fix integration test
      
      * Add model to README and toctree
      
      * Add document image
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Add decoder_hidden_size attribute
      
      * Update design of decoder
      
      * Add DepthEstimatorOutput class
      
      * Rename in_index to head_in_index and add feature extractor tests
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * Update pretrained model name and add to doc tests
      
      * Remove test.py script
      
      * Update copied from statements and clean up
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      0c55d47c
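A depth-estimation sketch for the new GLPN classes, mirroring the DPT usage earlier; `vinvino02/glpn-kitti` is an assumed checkpoint id for the uploaded weights.

```python
import requests
import torch
from PIL import Image
from transformers import GLPNFeatureExtractor, GLPNForDepthEstimation

# Checkpoint id assumed for illustration.
feature_extractor = GLPNFeatureExtractor.from_pretrained("vinvino02/glpn-kitti")
model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-kitti")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    # Returned as a DepthEstimatorOutput, the class introduced in this PR.
    predicted_depth = model(**inputs).predicted_depth
print(predicted_depth.shape)
```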
  14. 09 Mar, 2022 1 commit
    • Add FlaxBartForCausalLM (#15995) · b256f351
      Sanchit Gandhi authored
      * add causal lm
      
      * add CausalLM tests
      
      * Add FlaxBartForCausalLM
      
      * Add EncoderDecoder model tests
      
      * change docstring
      
      * make repo-consistency
      
      * suggested changes
      
      * remove jax ops
      
      * correction
      
      * rename pre-trained decoder model
      b256f351
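A minimal sketch of using the BART decoder standalone as a Flax causal LM; loading from the full `facebook/bart-base` checkpoint (whose encoder weights are simply unused here) is an assumption made for illustration.

```python
from transformers import AutoTokenizer, FlaxBartForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
# Only the decoder stack is used as a standalone causal LM.
model = FlaxBartForCausalLM.from_pretrained("facebook/bart-base")

inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
logits = model(**inputs).logits  # (batch, sequence, vocab_size)
print(logits.shape)
```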
  15. 04 Mar, 2022 1 commit
  16. 02 Mar, 2022 1 commit
    • Maskformer (#15682) · d83d22f5
      Francesco Saverio Zuppichini authored
      
      
      * maskformer
      
      * conflicts
      
      * conflicts
      
      * minor fixes
      
      * feature extractor test fix
      
      refactor MaskFormerLoss following conversation
      
      MaskFormer related types should not trigger a module time import error
      
      missed one
      
      removed all the types that are not used
      
      update config mapping
      
      minor updates in the doc
      
      resolved conversation that doesn't need a discussion
      
      minor changes
      
      resolved conversations
      
      fixed DetrDecoder
      
      * minor changes
      
      minor changes
      
      fixed mdx file
      
      test feature_extractor return types
      
      functional losses -> classes
      
      removed the return type test for the feature extractor
      
      minor changes + style + quality
      
      * conflicts?
      
      * rebase master
      
      * readme
      
      * added missing files
      
      * deleted poolformer tests that were in the wrong place
      
      * CI
      
      * minor changes
      
      * Apply suggestions from code review
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * resolved conversations
      
      * minor changes
      
      * conversations
      
      [Unispeech] Fix slow tests (#15818)
      
      * remove soundfile old way of loading audio
      
      * Adapt slow test
      
      [Barthez Tokenizer] Fix saving (#15815)
      
      [TFXLNet] Correct tf xlnet generate (#15822)
      
      * [TFXLNet] Correct tf xlnet
      
      * adapt test comment
      
      Fix the push run (#15807)
      
      Fix semantic segmentation pipeline test (#15826)
      
      Fix dummy_inputs() to dummy_inputs in symbolic_trace doc (#15776)
      
      Add model specific output classes to PoolFormer model docs (#15746)
      
      * Added model specific output classes to poolformer docs
      
      * Fixed Segformer typo in Poolformer docs
      
      Adding the option to return_timestamps on pure CTC ASR models. (#15792)
      
      * Adding the option to return_timestamps on pure CTC ASR models.
      
      * Remove `math.prod` which was introduced in Python 3.8
      
      * int are not floats.
      
      * Reworking the PR to support "char" vs "word" output.
      
      * Fixup!
      
      * Update src/transformers/pipelines/automatic_speech_recognition.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/pipelines/automatic_speech_recognition.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/pipelines/automatic_speech_recognition.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/pipelines/automatic_speech_recognition.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/pipelines/automatic_speech_recognition.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/pipelines/automatic_speech_recognition.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/pipelines/automatic_speech_recognition.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/pipelines/automatic_speech_recognition.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/transformers/pipelines/automatic_speech_recognition.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Quality.
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      HFTracer.trace should use/return self.graph to be compatible with torch.fx.Tracer (#15824)
      
      Fix tf.concatenate + test past_key_values for TF models (#15774)
      
      * fix wrong method name tf.concatenate
      
      * add tests related to causal LM / decoder
      
      * make style and quality
      
      * clean-up
      
      * Fix TFBertModel's extended_attention_mask when past_key_values is provided
      
      * Fix tests
      
      * fix copies
      
      * More tf.int8 -> tf.int32 in TF test template
      
      * clean-up
      
      * Update TF test template
      
      * revert the previous commit + update the TF test template
      
      * Fix TF template extended_attention_mask when past_key_values is provided
      
      * Fix some styles manually
      
      * clean-up
      
      * Fix ValueError: too many values to unpack in the test
      
      * Fix more: too many values to unpack in the test
      
      * Add a comment for extended_attention_mask when there is past_key_values
      
      * Fix TFElectra extended_attention_mask when past_key_values is provided
      
      * Add tests to other TF models
      
      * Fix for TF Electra test: add prepare_config_and_inputs_for_decoder
      
      * Fix not passing training arg to lm_head in TFRobertaForCausalLM
      
      * Fix tests (with past) for TF Roberta
      
      * add testing for past_key_values for TFElectra model
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      
      [examples/summarization and translation] fix readme (#15833)
      
      Add ONNX Runtime quantization for text classification notebook (#15817)
      
      Re-enable doctests for the quicktour (#15828)
      
      * Re-enable doctests for the quicktour
      
      * Re-enable doctests for task_summary (#15830)
      
      * Remove &
      
      Framework split model report (#15825)
      
      Add TFConvNextModel (#15750)
      
      * feat: initial implementation of convnext in tensorflow.
      
      * fix: sample code for the classification model.
      
      * chore: added checked for  from the classification model.
      
      * chore: set bias initializer in the classification head.
      
      * chore: updated license terms.
      
      * chore: removed unused imports
      
      * feat: enabled  argument during using drop_path.
      
      * chore: replaced tf.identity with layers.Activation(linear).
      
      * chore: edited default checkpoint.
      
      * fix: minor bugs in the initializations.
      
      * partial-fix: tf model errors for loading pretrained pt weights.
      
      * partial-fix: call method updated
      
      * partial-fix: cross loading of weights (4x3 variables to be matched)
      
      * chore: removed unneeded comment.
      
      * removed playground.py
      
      * rebasing
      
      * rebasing and removing playground.py.
      
      * fix: renaming TFConvNextStage conv and layer norm layers
      
      * chore: added initializers and other minor additions.
      
      * chore: added initializers and other minor additions.
      
      * add: tests for convnext.
      
      * fix: integration tester class.
      
      * fix: issues mentioned in pr feedback (round 1).
      
      * fix: how output_hidden_states arg is propagated inside the network.
      
      * feat: handling of  arg for pure cnn models.
      
      * chore: added a note on equal contribution in model docs.
      
      * rebasing
      
      * rebasing and removing playground.py.
      
      * feat: encapsulation for the convnext trunk.
      
      * Fix variable naming; Test-related corrections; Run make fixup
      
      * chore: added Joao as a contributor to convnext.
      
      * rebasing
      
      * rebasing and removing playground.py.
      
      * rebasing
      
      * rebasing and removing playground.py.
      
      * chore: corrected copyright year and added comment on NHWC.
      
      * chore: fixed the black version and ran formatting.
      
      * chore: ran make style.
      
      * chore: removed from_pt argument from test, ran make style.
      
      * rebasing
      
      * rebasing and removing playground.py.
      
      * rebasing
      
      * rebasing and removing playground.py.
      
      * fix: tests in the convnext subclass, ran make style.
      
      * rebasing
      
      * rebasing and removing playground.py.
      
      * rebasing
      
      * rebasing and removing playground.py.
      
      * chore: moved convnext test to the correct location
      
      * fix: locations for the test file of convnext.
      
      * fix: convnext tests.
      
      * chore: applied  sgugger's suggestion for dealing w/ output_attentions.
      
      * chore: added comments.
      
      * chore: applied updated quality environment style.
      
      * chore: applied formatting with quality environment.
      
      * chore: revert to the previous tests/test_modeling_common.py.
      
      * chore: revert to the original test_modeling_common.py
      
      * chore: revert to previous states for test_modeling_tf_common.py and modeling_tf_utils.py
      
      * fix: tests for convnext.
      
      * chore: removed output_attentions argument from convnext config.
      
      * chore: revert to the earlier tf utils.
      
      * fix: output shapes of the hidden states
      
      * chore: removed unnecessary comment
      
      * chore: reverting to the right test_modeling_tf_common.py.
      
      * Styling nits
      Co-authored-by: ariG23498 <aritra.born2fly@gmail.com>
      Co-authored-by: Joao Gante <joao@huggingface.co>
      Co-authored-by: Sylvain Gugger <Sylvain.gugger@gmail.com>
      
      * minor changes
      
      * doc fix in feature extractor
      
      * doc
      
      * typos
      
      * removed detr logic from config
      
      * removed detr logic from config
      
      * removed num_labels
      
      * small fix in the config
      
      * auxilary -> auxiliary
      
      * make style
      
      * some test is failing
      
      * fix a weird char in config preventing the doc-builder
      
      * retry to fix the doc-builder issue
      
      * make style
      
      * new try to fix the doc builder
      
      * CI
      
      * change weights to facebook
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: ariG23498 <aritra.born2fly@gmail.com>
      Co-authored-by: Joao Gante <joao@huggingface.co>
      Co-authored-by: Sylvain Gugger <Sylvain.gugger@gmail.com>
      d83d22f5
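A short sketch of the new MaskFormer inference path; the `facebook/maskformer-swin-base-ade` hub id follows the "change weights to facebook" step above but is still an assumption.

```python
import requests
import torch
from PIL import Image
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation

# Checkpoint id assumed from the "change weights to facebook" commit above.
ckpt = "facebook/maskformer-swin-base-ade"
feature_extractor = MaskFormerFeatureExtractor.from_pretrained(ckpt)
model = MaskFormerForInstanceSegmentation.from_pretrained(ckpt)

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
# Per-query class logits and mask logits, which the feature extractor's
# post-processing methods combine into segmentation maps.
print(outputs.class_queries_logits.shape, outputs.masks_queries_logits.shape)
```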
  17. 01 Mar, 2022 1 commit
  18. 28 Feb, 2022 1 commit
  19. 23 Feb, 2022 1 commit
  20. 18 Feb, 2022 1 commit
    • Add PLBart (#13269) · ae1f8350
      Gunjan Chhablani authored
      * Init PLBART
      
      * Add missing configuration file
      
      * Add conversion script and configuration file
      
      * Fix style
      
      * Update modeling and conversion scripts
      
      * Fix scale embedding in config
      
      * Add comment
      
      * Fix conversion script
      
      * Add classification option to conversion script
      
      * Fix vocab size in config doc
      
      * Add tokenizer files from MBart50
      
      * Allow no lang code in regular tokenizer
      
      * Add PLBart Tokenizer Converters
      
      * Remove mask from multi tokenizer
      
      * Remove mask from multi tokenizer
      
      * Change from MBart-50 to MBart tokenizer
      
      * Fix names and modify src/tgt behavior
      
      * Fix imports for tokenizer
      
      * Remove <mask> from multi tokenizer
      
      * Fix style
      
      * Change tokenizer_class to processor_class
      
      * Add attribute map to config class
      
      * Update modeling file to modified MBart code
      
      * Update configuration file to MBart style configuration
      
      * Fix tokenizer
      
      * Separate tokenizers
      
      * Fix error in tokenization auto
      
      * Copy MBart tests
      
      * Replace with MBart tokenization tests
      
      * Fix style
      
      * Fix language code in multi tokenizer
      
      * Fix configuration docs
      
      * Add entry for plbart_multi in transformers init
      
      * Add dummy objects and fix imports
      
      * Fix modeling tests
      
      * Add TODO in config
      
      * Fix copyright year
      
      * Fix modeling docs and test
      
      * Fix some tokenization tests and style
      
      * Add changes from review
      
      * Fix copies
      
      * Fix docs
      
      * Fix docs
      
      * Fix style
      
      * Fix year
      
      * Add changes from review
      
      * Remove extra changes
      
      * Fix base tokenizer and doc
      
      * Fix style
      
      * Fix modeling and slow tokenizer tests
      
      * Remove Multi-tokenizer Converter and Tests
      
      * Delete QA model and Multi Tokenizer dummy objects
      
      * Fix repo consistency and code quality issues
      
      * Fix example documentation
      
      * Fix style
      
      * Remove PLBartTokenizer from type checking in init
      
      * Fix consistency issue
      
      * Add changes from review
      
      * Fix style
      
      * Remove PLBartTokenizerFast
      
      * Remove FastTokenizer converter
      
      * Fix AutoTokenizer mapping
      
      * Add plbart to toctree and fix consistency issues
      
      * Add language codes tokenizer test
      
      * Fix styling and doc issues
      
      * Add fixes for failing tests
      
      * Fix copies
      
      * Fix failing modeling test
      
      * Change assert to assertTrue in modeling tests
      ae1f8350
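A mask-infilling sketch with the new PLBart classes. The `uclanlp/plbart-base` checkpoint id, the `java` language code, and the `lang_code_to_id` lookup are assumptions about the released assets.

```python
from transformers import PLBartTokenizer, PLBartForConditionalGeneration

# Checkpoint id and language code are assumed for illustration.
tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base", src_lang="java", tgt_lang="java")
model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-base")

# Mask-infilling on a snippet of Java code.
code = "public int add(int a, int b) { return a <mask> b; }"
inputs = tokenizer(code, return_tensors="pt")
generated = model.generate(
    **inputs,
    # lang_code_to_id key format is an assumption; adjust if the released tokenizer differs.
    decoder_start_token_id=tokenizer.lang_code_to_id["java"],
    max_length=32,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```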
  21. 04 Feb, 2022 1 commit
  22. 28 Jan, 2022 1 commit
    • Add XGLM models (#14876) · d25e25ee
      Suraj Patil authored
      
      
      * add xglm
      
      * update vocab size
      
      * fix model name
      
      * style and tokenizer
      
      * typo
      
      * no mask token
      
      * fix pos embed compute
      
      * fix args
      
      * fix tokenizer
      
      * fix positions
      
      * fix tokenization
      
      * style and dic fixes
      
      * fix imports
      
      * add fast tokenizer
      
      * update names
      
      * add pt tests
      
      * fix tokenizer
      
      * fix typo
      
      * fix tokenizer import
      
      * fix fast tokenizer
      
      * fix tokenizer
      
      * fix converter
      
      * add tokenizer test
      
      * update checkpoint names
      
      * fix tokenizer tests
      
      * fix slow tests
      
      * add copied from comments
      
      * rst -> mdx
      
      * flax model
      
      * update flax tests
      
      * quality
      
      * style
      
      * doc
      
      * update index and readme
      
      * fix copies
      
      * fix doc
      
      * update toctree
      
      * fix indent
      
      * minor fixes
      
      * fix config doc
      
      * don't save embed_pos weights
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * address Sylvain's comments, few doc fixes
      
      * fix check_repo
      
      * align order of arguments
      
      * fix copies
      
      * fix labels
      
      * remove unnecessary mapping
      
      * fix saving tokenizer
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      d25e25ee
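A minimal causal-LM sketch for the new XGLM model; `facebook/xglm-564M` is assumed to be the smallest released checkpoint.

```python
import torch
from transformers import XGLMTokenizer, XGLMForCausalLM

# Checkpoint id assumed for illustration.
tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-564M")
model = XGLMForCausalLM.from_pretrained("facebook/xglm-564M")

inputs = tokenizer("I wanted to conserve energy.", return_tensors="pt")
with torch.no_grad():
    # Next-token logits from the multilingual causal LM.
    logits = model(**inputs).logits
print(logits.shape)
```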
  23. 19 Jan, 2022 1 commit
    • Add ViLT (#14895) · ac227093
      NielsRogge authored
      
      
      * First commit
      
      * Add conversion script
      
      * Make conversion script work for base model
      
      * More improvements
      
      * Update conversion script, works for vqa
      
      * Add indexing argument to meshgrid
      
      * Make conversion script work for ViltForPreTraining
      
      * Add ViltForPreTraining to docs
      
      * Fix device issue
      
      * Add processor
      
      * Add MinMaxResize to feature extractor
      
      * Implement call method of ViltProcessor
      
      * Fix tests
      
      * Add integration test
      
      * Add loss calculation for VQA
      
      * Improve tests
      
      * Improve some more tests
      
      * Debug tests
      
      * Small improvements
      
      * Add support for attention_mask
      
      * Remove mask_it
      
      * Add pixel_mask
      
      * Add tests for ViltFeatureExtractor
      
      * Improve tests
      
      * Add ViltForNaturalLanguageVisualReasoning
      
      * Add ViltForNaturalLanguageVisualReasoning to conversion script
      
      * Minor fixes
      
      * Add support for image_embeds, update docstrings to markdown
      
      * Update docs to markdown
      
      * Improve conversion script
      
      * Rename ViltForPreTraining to ViltForMaskedLM
      
      * Improve conversion script
      
      * Convert docstrings to markdown
      
      * Fix code example of retrieval model
      
      * Properly convert masked language model
      
      * Add integration test for nlvr
      
      * Fix code quality
      
      * Apply suggestions from code review
      
      * Add copied from statements
      
      * Fix pretrained_config_archive_map
      
      * Fix docs
      
      * Add model to README
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Apply more suggestions from code review
      
      * Make code more readable
      
      * Add ViltForNaturalLanguageVisualReasoning to the tests
      
      * Rename ViltForVisualQuestionAnswering to ViltForQuestionAnswering
      
      * Replace pixel_values_2 by single tensor
      
      * Add hidden_states and attentions
      
      * Fix one more test
      
      * Fix all tests
      
      * Update year
      
      * Fix rebase issues
      
      * Fix another rebase issue
      
      * Remove ViltForPreTraining from auto mapping
      
      * Rename ViltForImageRetrievalTextRetrieval to ViltForImageAndTextRetrieval
      
      * Make it possible to use BertTokenizerFast in the processor
      
      * Use BertTokenizerFast by default
      
      * Rename ViltForNaturalLanguageVisualReasoning, define custom model output
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      ac227093
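A visual question answering sketch with the new ViLT processor and model; the `dandelin/vilt-b32-finetuned-vqa` checkpoint id is an assumption.

```python
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# VQA-finetuned checkpoint id assumed for illustration.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
question = "How many cats are there?"

# The processor combines the BERT tokenizer and the ViLT feature extractor described above.
inputs = processor(image, question, return_tensors="pt")
outputs = model(**inputs)
print(model.config.id2label[outputs.logits.argmax(-1).item()])
```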
  24. 18 Jan, 2022 1 commit
    • Add REALM (#13292) · 22454ae4
      Li-Huai (Allan) Lin authored
      
      
      * REALM initial commit
      
      * Retriever OK (Update new_gelu).
      
      * Encoder prediction score OK
      
      * Encoder pretrained model OK
      
      * Update retriever comments
      
      * Update docs, tests, and imports
      
      * Prune unused models
      
      * Make embedder as a module `RealmEmbedder`
      
      * Add RealmRetrieverOutput
      
      * Update tokenization
      
      * Pass all tests in test_modeling_realm.py
      
      * Prune RealmModel
      
      * Update docs
      
      * Add training test.
      
      * Remove completed TODO
      
      * Style & Quality
      
      * Prune `RealmModel`
      
      * Fixup
      
      * Changes:
      1. Remove RealmTokenizerFast
      2. Update docstrings
      3. Add a method to RealmTokenizer to handle candidates tokenization.
      
      * Fix up
      
      * Style
      
      * Add tokenization tests
      
      * Update `from_pretrained` tests
      
      * Apply suggestions
      
      * Style & Quality
      
      * Copy BERT model
      
      * Fix comment to avoid docstring copying
      
      * Make RealmBertModel private
      
      * Fix bug
      
      * Style
      
      * Basic QA
      
      * Save
      
      * Complete reader logits
      
      * Add searcher
      
      * Complete searcher & reader
      
      * Move block records init to constructor
      
      * Fix training bug
      
      * Add some outputs to RealmReader
      
      * Add finetuned checkpoint variable names parsing
      
      * Fix bug
      
      * Update REALM config
      
      * Add RealmForOpenQA
      
      * Update convert_tfrecord logits
      
      * Fix bugs
      
      * Complete imports
      
      * Update docs
      
      * Update naming
      
      * Add brute-force searcher
      
      * Pass realm model tests
      
      * Style
      
      * Exclude RealmReader from common tests
      
      * Fix
      
      * Fix
      
      * convert docs
      
      * up
      
      * up
      
      * more make style
      
      * up
      
      * upload
      
      * up
      
      * Fix
      
      * Update src/transformers/__init__.py
      
      * adapt testing
      
      * change modeling code
      
      * fix test
      
      * up
      
      * up
      
      * up
      
      * correct more
      
      * make retriever work
      
      * update
      
      * make style
      
      * finish main structure
      
      * Resolve merge conflict
      
      * Make everything work
      
      * Style
      
      * Fixup
      
      * Fixup
      
      * Update training test
      
      * fix retriever
      
      * remove hardcoded path
      
      * Fix
      
      * Fix modeling test
      
      * Update model links
      
      * Initial retrieval test
      
      * Fix modeling test
      
      * Complete retrieval tests
      
      * Fix
      
      * style
      
      * Fix tests
      
      * Fix docstring example
      
      * Minor fix of retrieval test
      
      * Update license headers and docs
      
      * Apply suggestions from code review
      
      * Style
      
      * Apply suggestions from code review
      
      * Add an example to RealmEmbedder
      
      * Fix
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      22454ae4
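A minimal sketch of the RealmEmbedder component added here (the piece that produces the dense scores used for retrieval); the `google/realm-cc-news-pretrained-embedder` checkpoint id is an assumption about the converted weights.

```python
import torch
from transformers import RealmTokenizer, RealmEmbedder

# Checkpoint id assumed for illustration.
tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")
model = RealmEmbedder.from_pretrained("google/realm-cc-news-pretrained-embedder")

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# The projected score is the dense vector REALM uses to score candidate blocks.
print(outputs.projected_score.shape)
```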
  25. 14 Jan, 2022 1 commit
  26. 10 Jan, 2022 2 commits
  27. 03 Jan, 2022 1 commit
  28. 23 Dec, 2021 1 commit
    • Add TFCLIPModel (#13967) · 8f2cc1c3
      Yih-Dar authored
      
      
      * Start the work for TFCLIPModel
      
      * Convert to TF code (TODO: loss + doc)
      
      * Clean up
      
      * Fix pooled_output for TFCLIPTextTransformer - using tf.gather_nd
      
      * assert -> raise error
      
      * Expose TFCLIPModel
      
      * Deal with dummy_inputs
      
      * Add tests
      
      * Fix all tests. TODO: manual check weight loading + add more comments
      
      * Fix pt tf equivalence test
      
      * fixes
      
      * update TFCLIPVisionEmbeddings's Conv2D
      
      * Fix loss + overwrite test_pt_tf_model_equivalence from common
      
      * Add a comment about the change about MainLayer in test_keras_save_load
      
      * Set return_loss=True in TFCLIPModelTester + make tests pass
      
      * overwrite test_pt_tf_model_equivalence from tf common
      
      * fix base_model_prefix
      
      * Fix examples
      
      * remove unused
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * apply review suggestions
      
      * change self.pre_layrnorm to self.pre_layernorm
      
      * apply more review suggestions
      
      * return attention probs before dropout (to align with PT)
      
      * fix weight init
      
      * fix
      
      * build doc
      
      * fix missing doc
      
      * fix for test
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      8f2cc1c3
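A short zero-shot image/text similarity sketch with the new TensorFlow CLIP port, reusing the existing `openai/clip-vit-base-patch32` checkpoint and CLIPProcessor.

```python
import requests
import tensorflow as tf
from PIL import Image
from transformers import CLIPProcessor, TFCLIPModel

model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = processor(
    text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="tf", padding=True
)

outputs = model(**inputs)
# Image-text similarity scores, softmaxed over the candidate captions.
probs = tf.nn.softmax(outputs.logits_per_image, axis=-1)
print(probs.numpy())
```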
  29. 22 Dec, 2021 1 commit
  30. 21 Dec, 2021 1 commit
    • Mass conversion of documentation from rst to Markdown (#14866) · 27b3031d
      Sylvain Gugger authored
      * Convert docstrings of all configurations and tokenizers
      
      * Processors and fixes
      
      * Last modeling files and fixes to models
      
      * Pipeline modules
      
      * Utils files
      
      * Data submodule
      
      * All the other files
      
      * Style
      
      * Missing examples
      
      * Style again
      
      * Fix copies
      
      * Say bye bye to rst docstrings forever
      27b3031d
  31. 13 Dec, 2021 1 commit
  32. 08 Dec, 2021 1 commit
    • NielsRogge's avatar
      Add Perceiver IO (#14487) · 65b20b73
      NielsRogge authored
      * First draft
      
      * Style and remove mlm
      
      * Make forward pass work
      
      * More improvements
      
      * More improvements
      
      * Fix bug
      
      * More improvements
      
      * More improvements
      
      * Add PerceiverTokenizer first draft
      
      * Improve conversion script
      
      * More improvements
      
      * Make conversion script work for the encoder
      
      * Make conversion script work with local pickle files
      
      * Style & quality, fix-copies
      
      * Add dummy input to conversion script
      
      * Add absolute position embeddings to TextPreProcessor
      
      * Make forward pass of encoder work
      
      * More improvements
      
      * Move text preprocessor to separate script
      
      * More improvements
      
      * More improvements
      
      * Add post processor
      
      * Make MLM model work
      
      * Style
      
      * Add PerceiverForMaskedLM
      
      * Add PerceiverImagePreprocessor
      
      * Make style
      
      * Make PerceiverForImageClassification work
      
      * More improvements
      
      * More improvements
      
      * Use tokenizer in conversion script
      
      * Use PerceiverForMaskedLM in conversion script
      
      * Define custom PerceiverModelOutput
      
      * Improve PerceiverAttention to make it work for both MLM and image classification
      
      * More improvements
      
      * More improvements
      
      * More improvements to the conversion script
      
      * Make conversion script work for both MLM and image classification
      
      * Add PerceiverFeatureExtractor
      
      * More improvements
      
      * Style and quality
      
      * Add center cropping
      
      * Fix bug
      
      * Small fix
      
      * Add print statement
      
      * Fix bug in image preprocessor
      
      * Fix bug with conversion script
      
      * Make output position embeddings an nn.Parameter layer instead of nn.Embedding (sketched after this commit entry)
      
      * Comment out print statements
      
      * Add position encoding classes
      
      * More improvements
      
      * Use position_encoding_kwargs
      
      * Add PerceiverForImageClassificationFourier
      
      * Make style & quality
      
      * Add PerceiverForImageClassificationConvProcessing
      
      * Style & quality
      
      * Add flow model
      
      * Move processors to modeling file
      
      * Make position encodings modular
      
      * Make basic decoder use modular position encodings
      
      * Add PerceiverForOpticalFlow to conversion script
      
      * Add AudioPreprocessor
      
      * Make it possible for the basic decoder to use Fourier position embeddings
      
      * Add PerceiverForMultimodalAutoencoding
      
      * Improve model for optical flow
      
      * Improve _build_network_inputs method
      
      * Add print statement
      
      * Fix device issue
      
      * Fix device of Fourier embeddings
      
      * Add print statements for debugging
      
      * Add another print statement
      
      * Add another print statement
      
      * Add another print statement
      
      * Add another print statement
      
      * Improve PerceiverAudioPreprocessor
      
      * Improve conversion script for multimodal model
      
      * More improvements
      
      * More improvements
      
      * Improve multimodal model
      
      * Make forward pass multimodal model work
      
      * More improvements
      
      * Improve tests
      
      * Fix some more tests
      
      * Add output dataclasses
      
      * Make more tests pass
      
      * Add print statements for debugging
      
      * Add tests for image classification
      
      * Add PerceiverClassifierOutput
      
      * More improvements
      
      * Make more tests pass for the optical flow model
      
      * Make style & quality
      
      * Small improvements
      
      * Don't support training for optical flow model for now
      
      * Fix _prepare_for_class for tests
      
      * Make more tests pass, add some docs
      
      * Add multimodal model to tests
      
      * Minor fixes
      
      * Fix tests
      
      * Improve conversion script
      
      * Make fixup
      
      * Remove pos_dim argument
      
      * Fix device issue
      
      * Potential fix for OOM
      
      * Revert previous commit
      
      * Fix test_initialization
      
      * Add print statements for debugging
      
      * Fix print statement
      
      * Add print statement
      
      * Add print statement
      
      * Add print statement
      
      * Add print statement
      
      * Add print statement
      
      * Add print statement
      
      * Remove need for output_shape
      
      * Comment out output_shape
      
      * Remove unnecessary code
      
      * Improve docs
      
      * Fix make fixup
      
      * Remove PerceiverTextProcessor from init
      
      * Improve docs
      
      * Small improvement
      
      * Apply first batch of suggestions from code review
      
      * Apply more suggestions from code review
      
      * Update docstrings
      
      * Define dicts beforehand for readability
      
      * Rename task to architecture in conversion script, include PerceiverModel in tests
      
      * Add print statements for debugging
      
      * Fix tests on GPU
      
      * Remove preprocessors, postprocessors and decoders from main init
      
      * Add integration test
      
      * Fix docs
      
      * Replace einops by torch
      
      * Update for new docs frontend
      
      * Rename PerceiverForImageClassification
      
      * Improve docs
      
      * Improve docs
      
      * Improve docs of PerceiverModel
      
      * Fix some more tests
      
      * Improve center_crop
      
      * Add PerceiverForSequenceClassification
      
      * Small improvements
      
      * Fix tests
      
      * Add integration test for optical flow model
      
      * Clean up
      
      * Add tests for tokenizer
      
      * Fix tokenizer by adding special tokens properly
      
      * Fix CI
      65b20b73
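      On the "output position embeddings as an nn.Parameter instead of nn.Embedding" point above, a minimal sketch of the pattern (illustrative names, not the PR's exact module):

          import torch
          import torch.nn as nn

          class TrainablePositionEncoding(nn.Module):
              # Learned query/output position embeddings kept as a plain parameter tensor,
              # so they can be broadcast to any batch size without an index lookup.
              def __init__(self, index_dims: int, num_channels: int):
                  super().__init__()
                  self.position_embeddings = nn.Parameter(torch.randn(index_dims, num_channels))

              def forward(self, batch_size: int) -> torch.Tensor:
                  # Expand the shared (index_dims, num_channels) table across the batch dimension.
                  return self.position_embeddings.expand(batch_size, -1, -1)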
  33. 07 Dec, 2021 1 commit
    • Ryokan RI's avatar
      Add mLUKE (#14640) · 30646a0a
      Ryokan RI authored
      * implement MLukeTokenizer and LukeForMaskedLM (usage sketch after this commit entry)
      
      * update tests
      
      * update docs
      
      * add LukeForMaskedLM to check_repo.py
      
      * update README
      
      * fix test and specify the entity pad id in tokenization_(m)luke
      
      * fix EntityPredictionHeadTransform
      30646a0a
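      A rough usage sketch of the entity-aware tokenizer added here, assuming the studio-ousia/mluke-base checkpoint (argument names follow the LUKE tokenizer API):

          from transformers import MLukeTokenizer, LukeForMaskedLM

          tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
          model = LukeForMaskedLM.from_pretrained("studio-ousia/mluke-base")

          text = "ISO 639-3 uses the code fas for the dialects spoken across Iran and Afghanistan."
          # Character-level spans of the entity mentions; the tokenizer derives entity ids and masks from them.
          entity_spans = [(0, 9), (59, 63), (68, 79)]

          inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
          outputs = model(**inputs)
          print(outputs.logits.shape)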
  34. 01 Dec, 2021 1 commit
    • Sylvain Gugger's avatar
      Doc new front (#14590) · 4df7d05a
      Sylvain Gugger authored
      
      
      * Convert PretrainedConfig doc to Markdown
      
      * Use syntax
      
      * Add necessary doc files (#14496)
      
      * Doc fixes (#14499)
      
      * Fixes for the new front
      
      * Convert DETR file for table
      
      * Title is needed
      
      * Simplify a bit
      
      * Even simpler
      
      * Remove imports
      
      * Fix typo in toctree (#14516)
      
      * Fix checkpoints badge
      
      * Update versions.yml format (#14517)
      
      * Doc new front github actions (#14512)
      
      * Doc new front github actions
      
      * Fix docstring
      
      * Fix feature extraction utils import (#14515)
      
      * Address Julien's comments
      
      * Push to doc-builder
      
      * Ready for merge
      
      * Remove old build and deploy
      
      * Doc misc fixes (#14583)
      
      * Rm versions.yml from doc
      
      * Fix converting.rst
      
      * Rm pretrained_models from toctree
      
      * Fix index links (#14567)
      
      * Fix links in README
      
      * Localized READMEs
      
      * Fix copy script
      
      * Fix find doc script
      
      * Update README_ko.md
      Co-authored-by: Julien Chaumond <julien@huggingface.co>
      Co-authored-by: Julien Chaumond <julien@huggingface.co>
      
      * Adapt build command to new CLI tools (#14578)
      
      * Fix typo
      
      * Fix doc interlinks (#14589)
      
      * Convert PretrainedConfig doc to Markdown
      
      * Use syntax
      
      * Rm pattern <[a-z]+(.html).*> (link-rewrite sketch after this commit entry)
      
      * Rm huggingface.co/transformers/master
      
      * Rm .html
      
      * Rm .html from index.mdx
      
      * Rm .html from model_summary.rst
      
      * Update index.mdx rm html
      
      * Update remove .html
      
      * Fix inner doc links
      
      * Fix interlink in preprocessing.rst
      
      * Update pr_checks
      Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Styling
      Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      Co-authored-by: Julien Chaumond <julien@huggingface.co>
      4df7d05a
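      The "Rm pattern" and "Rm .html" steps above amount to rewriting legacy Sphinx-style links for the new frontend; a rough Python illustration (not the actual conversion script) of stripping .html suffixes from internal doc links:

          import re

          def strip_html_suffix(text: str) -> str:
              # Turn references like "main_classes/configuration.html#pretrainedconfig"
              # into "main_classes/configuration#pretrainedconfig" for the Markdown-based docs.
              return re.sub(r"([\w/]+)\.html(#[\w-]*)?", r"\1\2", text)

          print(strip_html_suffix("see main_classes/configuration.html#pretrainedconfig"))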
  35. 30 Nov, 2021 1 commit
    • Suraj Patil's avatar
      VisionTextDualEncoder (#13511) · fc1d97f2
      Suraj Patil authored
      
      
      * init vision_text_dual_encoder
      
      * fix merge
      
      * remove extra heads
      
      * fix tests
      
      * remove VISION_TEXT_DUAL_ENCODER_PRETRAINED_CONFIG_ARCHIVE_MAP
      
      * remove archive map
      
      * fix imports
      
      * fix more imports
      
      * fix init
      
      * delete tokenizers
      
      * fix imports
      
      * clean
      
      * support clip's vision model
      
      * handle None config
      
      * begin tests
      
      * more test and few fixes
      
      * warn about newly init weights
      
      * more tests
      
      * add loss to model (loss sketch after this commit entry)
      
      * remove extra classes from doc
      
      * add processor
      
      * doc and small fixes
      
      * add start docstr
      
      * update flax model
      
      * flax tests
      
      * more flax tests
      
      * doc
      
      * quality
      
      * doc and quality
      
      * fix doc
      
      * doc
      
      * remove comments
      
      * update warning
      
      * quality
      
      * fix docs
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * replace asserts, fix imports
      
      * update imports
      
      * fix import
      
      * address some review comments
      
      * fix check
      
      * reduce tolerance
      
      * fix test
      
      * add flax integration test
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * address Sylvain's comments
      
      * fix style
      
      * add pt_flax_equivalence test in PT tests
      
      * add pt integration test
      
      * update test
      
      * use pre-trained checkpoint in examples
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      fc1d97f2
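      The "add loss to model" step above refers to a CLIP-style contrastive objective for the dual encoder; a minimal sketch of that loss (illustrative, not the PR's exact code):

          import torch
          import torch.nn.functional as F

          def clip_style_loss(logits_per_text: torch.Tensor) -> torch.Tensor:
              # logits_per_text is a (batch, batch) similarity matrix between text and image embeddings;
              # matching pairs sit on the diagonal, so the targets are simply 0..batch-1.
              labels = torch.arange(logits_per_text.size(0), device=logits_per_text.device)
              caption_loss = F.cross_entropy(logits_per_text, labels)
              image_loss = F.cross_entropy(logits_per_text.t(), labels)
              return (caption_loss + image_loss) / 2.0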