1. 05 Oct, 2020 13 commits
    • Add new dummy PT objects · 3bd3d8b5
      Sylvain Gugger authored
    • Allow soft dependencies in the namespace with ImportErrors at use (#7537) · 28d183c9
      Sylvain Gugger authored
      * PoC on RAG
      
      * Format class name/obj name
      
      * Better name in message
      
      * PoC on one TF model
      
      * Add PyTorch and TF dummy objects + script
      
      * Treat scikit-learn
      
      * Bad copy pastes
      
      * Typo
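      The pattern in this PR is to register dummy placeholder objects so that imports of a missing soft dependency still succeed, and the ImportError fires only when the object is actually used. A minimal sketch of the idea (class and message are illustrative, not the actual generated code):

      ```python
      class DummyPyTorchObject:
          """Stand-in for a class whose backend (here, PyTorch) is missing.

          Importing or referencing the name succeeds; instantiating it
          raises a helpful error pointing at the missing dependency.
          """

          def __init__(self, *args, **kwargs):
              raise ImportError(
                  f"{type(self).__name__} requires PyTorch, which is not installed."
              )

      # Import-time access is fine:
      cls = DummyPyTorchObject
      # Only actual use (instantiation) raises ImportError.
      ```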
    • Update Code example according to deprecation of AutoModelWithLMHead (#7555) · 1a00f46c
      Joshua H authored
      'The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.'
      I don't know how to change the 'How to use this model directly from the 🤗/transformers library:' part since it is not part of the model paper
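      The deprecation warning quoted above maps the old catch-all class to three task-specific replacements; a small pure-Python sketch of that mapping (the helper name `replacement_for` is my own, the class names are from the warning):

      ```python
      # Replacements named in the AutoModelWithLMHead deprecation warning.
      REPLACEMENTS = {
          "causal-lm": "AutoModelForCausalLM",    # causal language models
          "masked-lm": "AutoModelForMaskedLM",    # masked language models
          "seq2seq-lm": "AutoModelForSeq2SeqLM",  # encoder-decoder models
      }

      def replacement_for(task: str) -> str:
          """Return the class name that replaces AutoModelWithLMHead for a task."""
          try:
              return REPLACEMENTS[task]
          except KeyError:
              raise ValueError(
                  f"unknown task {task!r}; expected one of {sorted(REPLACEMENTS)}"
              )
      ```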
    • docs(pretrained_models): fix num parameters (#7575) · 0d79de73
      Amine Abdaoui authored
      * docs(pretrained_models): fix num parameters
      
      * fix(pretrained_models): correct typo
      Co-authored-by: Amin <amin.geotrend@gmail.com>
    • Fix tokenization in SQuAD for RoBERTa, Longformer, BART (#7387) · ba5ea66e
      Malte Pietsch authored
      * fix squad tokenization for roberta & co
      
      * change to pure type based check
      
      * sort imports
    • 0270256b
      Sylvain Gugger authored
    • Add `power` argument for TF PolynomialDecay (#5732) · 60de910e
      Cola authored
      * Add `power` argument for TF PolynomialDecay

      * Create default optimizer with power

      * Add argument to training args

      * Clean code format

      * Fix black warning

      * Fix code format
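      The `power` argument controls the shape of the decay curve. A pure-Python sketch of the formula that `tf.keras.optimizers.schedules.PolynomialDecay` documents (without cycling; the function and variable names are mine):

      ```python
      def polynomial_decay(step, initial_lr, end_lr, decay_steps, power=1.0):
          """Learning rate after `step` steps of polynomial decay."""
          # Clamp so the rate stays at end_lr once decay_steps is reached.
          frac = 1.0 - min(step, decay_steps) / decay_steps
          return (initial_lr - end_lr) * frac ** power + end_lr

      # power=1.0 gives a linear ramp; power>1 front-loads the decay.
      print(polynomial_decay(50, 0.1, 0.0, 100, power=1.0))  # 0.05
      print(polynomial_decay(50, 0.1, 0.0, 100, power=2.0))  # 0.025
      ```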
    • Add Electra unexpected keys (#7569) · 41c3a3b9
      Lysandre Debut authored
    • [Model card] Java Code Summarizer model (#7568) · 071970fe
      Nathan Cooper authored
      * Create README.md
      
      * Update model_cards/ncoop57/bart-base-code-summarizer-java-v0/README.md
      Co-authored-by: Julien Chaumond <chaumond@gmail.com>
    • SqueezeBERT architecture (#7083) · 02ef825b
      Forrest Iandola authored
      * configuration_squeezebert.py
      
      thin wrapper around bert tokenizer
      
      fix typos
      
      wip sb model code
      
      wip modeling_squeezebert.py. Next step is to get the multi-layer-output interface working
      
      set up squeezebert to use BertModelOutput when returning results.
      
      squeezebert documentation
      
      formatting
      
      allow head mask that is an array of [None, ..., None]
      
      docs
      
      docs cont'd
      
      path to vocab
      
      docs and pointers to cloud files (WIP)
      
      line length and indentation
      
      squeezebert model cards
      
      formatting of model cards
      
      untrack modeling_squeezebert_scratchpad.py
      
      update aws paths to vocab and config files
      
      get rid of stub of NSP code, and advise users to pretrain with mlm only
      
      fix rebase issues
      
      redo rebase of modeling_auto.py
      
      fix issues with code formatting
      
      more code format auto-fixes
      
      move squeezebert before bert in tokenization_auto.py and modeling_auto.py because squeezebert inherits from bert
      
      tests for squeezebert modeling and tokenization
      
      fix typo
      
      move squeezebert before bert in modeling_auto.py to fix inheritance problem
      
      disable test_head_masking, since squeezebert doesn't yet implement head masking
      
      fix issues exposed by the test_modeling_squeezebert.py
      
      fix an issue exposed by test_tokenization_squeezebert.py
      
      fix issue exposed by test_modeling_squeezebert.py
      
      auto generated code style improvement
      
      issue that we inherited from modeling_xxx.py: SqueezeBertForMaskedLM.forward() calls self.cls(), but there is no self.cls, and I think the goal was actually to call self.lm_head()
      
      update copyright
      
      resolve failing 'test_hidden_states_output' and remove unused encoder_hidden_states and encoder_attention_mask
      
      docs
      
      add integration test. rename squeezebert-mnli --> squeezebert/squeezebert-mnli
      
      autogenerated formatting tweaks
      
      integrate feedback from patrickvonplaten and sgugger to programming style and documentation strings
      
      * tiny change to order of imports
    • Cleanup documentation for BART, Marian, MBART and Pegasus (#7523) · e2c935f5
      Sylvain Gugger authored
      * Cleanup documentation for BART, Marian, MBART and Pegasus
      
      * Cleanup documentation for BART, Marian, MBART and Pegasus
    • LayoutLM: add exception handling for bbox values (#7452) · 5e941bec
      Alexandr authored
      * LayoutLM: add exception handling for bbox values
      
      To replicate unhandled error:
      
      - In `test_modelling_layoutlm.py` set `range_bbox=1025`, i.e. greater than 1024
      - Run `pytest tests/test_modeling_layoutlm.py`
      
      The requirement for bbox values to be within the range 0-1000 is documented,
      but if it is violated it is not clear from the error message what the
      issue is.
      
      * Update src/transformers/modeling_layoutlm.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
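      The fix above turns an opaque failure into an explicit error when a coordinate falls outside the documented 0-1000 range. A minimal pure-Python sketch of such a check (the function name is hypothetical, not the actual patch):

      ```python
      def check_bbox_values(bbox, low=0, high=1000):
          """Raise a clear error if any box coordinate is outside [low, high]."""
          for box in bbox:
              for coord in box:
                  if not low <= coord <= high:
                      raise ValueError(
                          f"bbox coordinate {coord} is outside the expected range "
                          f"{low}-{high}; LayoutLM bboxes must be normalized to "
                          "a 0-1000 scale"
                      )

      check_bbox_values([[0, 12, 500, 1000]])  # within range: no error
      ```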
    • Dhaval Taunk
  2. 04 Oct, 2020 2 commits
  3. 02 Oct, 2020 1 commit
  4. 01 Oct, 2020 24 commits