1. 05 Apr, 2021 2 commits
  2. 01 Apr, 2021 2 commits
  3. 31 Mar, 2021 4 commits
  4. 30 Mar, 2021 6 commits
    • GPT Neo few fixes (#10968) · 83d38c9f
      Suraj Patil authored
      * fix checkpoint names
      
      * auto model
      
      * fix doc
    • fix big bird gpu test (#10967) · 7772ddb4
      Patrick von Platen authored
    • GPT Neo (#10848) · 86026437
      Suraj Patil authored
      
      
      * lets begin
      
      * boom boom
      
      * fix out proj in attn
      
      * fix attention
      
      * fix local attention
      
      * add tokenizer
      
      * fix imports
      
      * autotokenizer
      
      * fix checkpoint name
      
      * cleanup
      
      * more clean-up
      
      * more cleanup
      
      * output attentions
      
      * fix attn mask creation
      
      * fix imports
      
      * config doc
      
      * add tests
      
      * add slow tests
      
      * quality
      
      * add conversion script
      
      * copyright
      
      * typo
      
      * another one bites the dust
      
      * fix attention tests
      
      * doc
      
      * add embed init in convert function
      
      * fix copies
      
      * remove tokenizer
      
      * enable caching
      
      * address review comments
      
      * improve config and create attn layer list internally
      
      * more consistent naming
      
      * init hf config from mesh-tf config json file
      
      * remove neo tokenizer from doc
      
      * handle attention_mask in local attn layer
      
      * attn_layers => attention_layers
      
      * add tokenizer_class in config
      
      * fix docstring
      
      * raise if len of attention_layers is not same as num_layers
      
      * remove tokenizer_class from config
      
      * more consistent naming
      
      * fix doc
      
      * fix checkpoint names
      
      * fp16 compat
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
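The "create attn layer list internally" and "raise if len of attention_layers is not same as num_layers" items above can be sketched roughly as follows. This is a minimal illustration, not the actual transformers API; the function names (`expand_attention_types`, `validate_attention_layers`) are made up for the sketch:

```python
# Sketch: expand a compact attention-type spec (pattern + repeat count)
# into a flat per-layer list, then validate its length against the
# number of layers. Illustrative only, not the real GPTNeoConfig code.

def expand_attention_types(attention_types):
    """Expand e.g. [[["global", "local"], 12]] into a flat per-layer list."""
    layers = []
    for pattern, repeats in attention_types:
        for _ in range(repeats):
            layers.extend(pattern)
    return layers


def validate_attention_layers(attention_layers, num_layers):
    # Fail fast if the expanded list does not cover every layer exactly once.
    if len(attention_layers) != num_layers:
        raise ValueError(
            f"`attention_layers` has length {len(attention_layers)} "
            f"but `num_layers` is {num_layers}; they must match."
        )


layers = expand_attention_types([[["global", "local"], 12]])
validate_attention_layers(layers, num_layers=24)
print(layers[:4])  # alternating global/local layers
```

Deriving the per-layer list from a compact spec keeps the config JSON small while still letting the model index attention type by layer number.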
    • [WIP][Flax] Add general conversion script (#10809) · 8780caa3
      Patrick von Platen authored
      
      
      * save intermediate
      
      * finish first version
      
      * delete some more
      
      * improve import
      
      * fix roberta
      
      * Update src/transformers/modeling_flax_pytorch_utils.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/modeling_flax_pytorch_utils.py
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * small corrections
      
      * apply all comments
      
      * fix deterministic
      
      * make fix-copies
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
    • Sagemaker test (#10925) · 604c0850
      Philipp Schmid authored
      * init
      
      * first working test
      
      * added todo for setup.py
      
      * working test for single node multi node ddp and smd
      
      * added tensorflow single node test
      
      * added directory for pytorch and tensorflow due to different requirements.txt
      
      * added directory for pytorch and tensorflow
      
      * added comment for run_glue until it is available
      
      * added output_dir to it
      
      * smaller dataset to make test running faster
      
      * adjust HP and script
      
      * adjusted parameter for tensorflow
      
      * refactored test scripts
      
      * adjusted make file
      
      * updated dlc container
      
      * commented in all tests
      
      * added both ecr images
      
      * added new master branches
      
      * debug
      
      * added new datasets version
      
      * init
      
      * strange rebase bug
      
      * removed changes
      
      * changed min version for tests to work
      
      * updated DLC
      
      * added model parallel test
      
      * removed test files
      
      * removed test files
      
      * tested with new dlc
      
      * added correct sagemaker sdk version
      
      * adjust DLCs for official one
      
      * reworked tests
      
      * quality
      
      * removed default profile, added documentation for it
      
      * added step in release for sagemaker tests
      
      * reverted version for example script, removed duplicated script, and added install from master to requirements.txt
      
      * removed mistaken .DS_Stores from mac
      
      * fixed tests
      
      * addressed Sylvain's feedback
      
      * make style
      
      * addressed Lysandre's feedback
    • BigBird (#10183) · 6dfd0272
      Vasudev Gupta authored
      
      
      * init bigbird
      
      * model.__init__ working, conversion script ready, config updated
      
      * add conversion script
      
      * BigBirdEmbeddings working :)
      
      * slightly update conversion script
      
      * BigBirdAttention working :) ; some bug in layer.output.dense
      
      * add debugger-notebook
      
      * forward() working for BigBirdModel :) ; replaced gelu with gelu_fast
      
      * tf code adapted to torch till rand_attn in bigbird_block_sparse_attention ; till now everything working :)
      
      * BigBirdModel working in block-sparse attention mode :)
      
      * add BigBirdForPreTraining
      
      * small fix
      
      * add tokenizer for BigBirdModel
      
      * fix config & hence modeling
      
      * fix base prefix
      
      * init testing
      
      * init tokenizer test
      
      * pos_embed must be absolute, attn_type=original_full when add_cross_attn=True, nsp loss is optional in BigBirdForPreTraining, add assert statements
      
      * remove position_embedding_type arg
      
      * complete normal tests
      
      * add comments to block sparse attention
      
      * add attn_probs for sliding & global tokens
      
      * create fn for block sparse attn mask creation
      
      * add special tests
      
      * restore pos embed arg
      
      * minor fix
      
      * attn probs update
      
      * make big bird fully gpu friendly
      
      * fix tests
      
      * remove pruning
      
      * correct tokenizer & minor fixes
      
      * update conversion script , remove norm_type
      
      * tokenizer-inference test add
      
      * remove extra comments
      
      * add docs
      
      * save intermediate
      
      * finish trivia_qa conversion
      
      * small update to forward
      
      * correct qa and layer
      
      * better error message
      
      * BigBird QA ready
      
      * fix rebased
      
      * add trivia-qa debugger notebook
      
      * qa setup
      
      * fixed till embeddings
      
      * some issue in q/k/v_layer
      
      * fix bug in conversion-script
      
      * fixed till self-attn
      
      * qa fixed except layer norm
      
      * add qa end2end test
      
      * fix gradient ckpting ; other qa test
      
      * speed-up big bird a bit
      
      * hub_id=google
      
      * clean up
      
      * make quality
      
      * speed up einsum with bmm
      
      * finish perf improvements for big bird
      
      * remove wav2vec2 tok
      
      * fix tokenizer
      
      * include docs
      
      * correct docs
      
      * add helper to auto pad block size
      
      * make style
      
      * remove fast tokenizer for now
      
      * fix some
      
      * add pad test
      
      * finish
      
      * fix some bugs
      
      * fix another bug
      
      * fix buffer tokens
      
      * fix comment and merge from master
      
      * add comments
      
      * make style
      
      * commit some suggestions
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Fix typos
      
      * fix some more suggestions
      
      * add another patch
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * fix copies
      
      * another path
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * update
      
      * update nit suggestions
      
      * make style
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
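The "add helper to auto pad block size" item above can be sketched as follows. This is an illustrative stand-in only (the real helper lives in the BigBird modeling code and also pads the attention mask); the function names here are hypothetical:

```python
# Sketch: block-sparse attention requires the sequence length to be a
# multiple of the block size, so pad up to the next multiple.
# Illustrative only, not the actual BigBird implementation.

def padding_needed(seq_len: int, block_size: int) -> int:
    """Number of pad tokens so that seq_len becomes a multiple of block_size."""
    return (block_size - seq_len % block_size) % block_size


def pad_to_block_size(token_ids, block_size, pad_token_id=0):
    pad_len = padding_needed(len(token_ids), block_size)
    # Append pad tokens on the right; a real helper would pad the
    # attention mask the same way so padded positions are ignored.
    return token_ids + [pad_token_id] * pad_len


padded = pad_to_block_size(list(range(10)), block_size=4)
print(len(padded))  # 12, the next multiple of 4
```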
  5. 26 Mar, 2021 1 commit
  6. 25 Mar, 2021 1 commit
    • Layout lm tf 2 (#10636) · 4684bfc7
      Amir Tahmasbi authored
      
      
      * Added embeddings layer
      
      * Added layoutlm layers, main model, maskedlm and token classification classes
      
      * Added model classes to tf auto models
      
      * Added model to PT to TF conversion script
      
      * Added model to doc README
      
      * Added tests
      
      * Removed unused imports
      
      * Added layoutlm model, test, and doc for sequence classification, and fix imports in __init__.py
      
      * Made tests pass!
      
      * Fixed typos in imports and docs
      
      * Fixed a typo in embeddings layer
      
      * Removed imports
      
      * Fixed formatting issues, imports, tests
      
      * Fixed small formatting issues
      
      * Removed duplicates import from main __init__.py
      
      * Changed default arg to True for adding pooling layer to TF LayoutLM
      
      * Fixed formatting issues
      
      * Style
      
      * Added copied from to classes copied from bert
      
      * Fixed doc strings examples to work with layoutlm inputs
      
      * Removed PyTorch reference in doc strings example
      
      * Added integration tests
      
      * Cleaned up initialization file
      
      * Updated model checkpoint identifiers
      
      * Fixed imports
      Co-authored-by: Amir Tahmasbi <amir@ehsai.ca>
      Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
  7. 23 Mar, 2021 1 commit
  8. 22 Mar, 2021 1 commit
  9. 19 Mar, 2021 1 commit
  10. 18 Mar, 2021 3 commits
    • Fix distributed evaluation (#10795) · 008672e6
      Sylvain Gugger authored
      * Fix distributed evaluation
      
      * Use logger
    • from_pretrained: check that the pretrained model is for the right model architecture (#10586) · 094afa51
      Vimarsh Chaturvedi authored
      
      
      * Added check to ensure model name passed to from_pretrained and model are the same
      
      * Added test to check from_pretrained throws assert error when passed an incompatible model name
      
      * Modified assert in from_pretrained with f-strings. Modified test to ensure desired assert message is being generated
      
      * Added check to ensure config and model has model_type
      
      * Fix FlauBERT heads
      
      Co-authored-by: vimarsh chaturvedi <vimarsh chaturvedi>
      Co-authored-by: Stas Bekman <stas@stason.org>
      Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
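The architecture check described above can be sketched roughly as follows. `FakeConfig` and `FakeModel` are illustrative stand-ins, not transformers classes; the sketch only shows the shape of the check, comparing the checkpoint config's `model_type` against the type the loading class expects:

```python
# Sketch: when loading a pretrained checkpoint, compare the config's
# `model_type` with the model class's expected type and fail early on
# a mismatch. Illustrative stand-ins, not the transformers API.

class FakeConfig:
    model_type = "flaubert"


class FakeModel:
    # Each concrete model class knows which model_type it expects.
    expected_model_type = "bert"

    @classmethod
    def from_pretrained(cls, config):
        config_type = getattr(config, "model_type", None)
        if config_type is not None and config_type != cls.expected_model_type:
            raise ValueError(
                f"Tried to load a '{config_type}' checkpoint into a "
                f"'{cls.expected_model_type}' model class."
            )
        return cls()


try:
    FakeModel.from_pretrained(FakeConfig())
except ValueError as err:
    print(err)
```

Failing at load time with an explicit message is much friendlier than the shape-mismatch errors that otherwise surface deep inside weight loading.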
    • [Flax] Adapt Flax models to new structure (#9484) · 0b98ca36
      Patrick von Platen authored
      
      
      * Create modeling_flax_electra with code copied from modeling_flax_bert
      
      * Add ElectraForMaskedLM and ElectraForPretraining
      
      * Add modeling test for Flax electra and fix naming and arg in Flax Electra model
      
      * Add documentation
      
      * Fix code style
      
      * Fix code quality
      
      * Adjust tol in assert_almost_equal due to very small difference between model output, ranging 0.0010 - 0.0016
      
      * Remove redundant ElectraPooler
      
      * save intermediate
      
      * adapt
      
      * correct bert flax design
      
      * adapt roberta as well
      
      * finish roberta flax
      
      * finish
      
      * apply suggestions
      
      * apply suggestions
      Co-authored-by: Chris Nguyen <anhtu2687@gmail.com>
  11. 17 Mar, 2021 6 commits
    • Smmp batch not divisible by microbatches fix (#10778) · 0282e24e
      Mansi Mane authored
      
      
      * Added debug prints
      
      * Added config
      
      * Added prints
      
      * Added prints
      
      * Added extra samples to SequentialDistributedSampler
      
      Updated SequentialDistributedSampler call
      
      * Added debug prints
      
      * Removed extra prints
      
      * Making predictions and labels a multiple of batch size
      
      * updated number of microbatches
      
      * Removed extra prints
      
      * Made start_remainder similar to DistributedSamplerWithLoop
      
      * Minor spacing update
      
      * Test and styling
      
      * Rename test
      Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
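The "extra samples" idea above (making predictions and labels a multiple of the batch size, in the spirit of DistributedSamplerWithLoop) can be sketched as follows. This is a toy illustration, not the actual Trainer/sampler code; `shard_indices` is a hypothetical name:

```python
# Sketch: extend the list of dataset indices with samples repeated from
# the front (wrap-around) so the total is divisible by
# batch_size * num_replicas, then shard contiguously per replica.
# Illustrative only, not the real SequentialDistributedSampler.

def shard_indices(num_samples, num_replicas, batch_size):
    indices = list(range(num_samples))
    chunk = batch_size * num_replicas
    remainder = (-num_samples) % chunk
    # Wrap around: reuse indices from the front as padding so every
    # replica sees only full batches.
    indices += indices[:remainder]
    per_replica = len(indices) // num_replicas
    return [
        indices[r * per_replica:(r + 1) * per_replica]
        for r in range(num_replicas)
    ]


shards = shard_indices(num_samples=10, num_replicas=2, batch_size=4)
print([len(s) for s in shards])  # [8, 8] - each shard is full batches
```

After gathering, the evaluation loop would truncate predictions back to the true dataset length so the duplicated samples do not affect metrics.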
    • Check copies blackify (#10775) · 40b049c7
      Sylvain Gugger authored
      * Apply black before checking copies
      
      * Fix for class methods
      
      * Deal with lonely brackets
      
      * Remove debug and add forward changes
      
      * Separate copies and fix test
      
      * Add black as a test dependency
    • [DeepSpeed] improve checkpoint loading code plus tests (#10760) · cd8c93f7
      Stas Bekman authored
      * deepspeed checkpoint loading code plus tests
      
      * style
      
      * style
    • small improvements (#10773) · 0486ccdd
      Patrick von Platen authored
    • up (#10771) · f20d75a1
      Patrick von Platen authored
  12. 16 Mar, 2021 3 commits
  13. 15 Mar, 2021 6 commits
  14. 12 Mar, 2021 3 commits
    • TensorFlow tests: having from_pt set to True requires torch to be installed. (#10664) · 184ef8ec
      Lysandre Debut authored
      * TF model exists for Blenderbot 400M
      
      * Marian
      
      * RAG
    • Adding new parameter to `generate`: `max_time`. (#9846) · 543d0549
      Nicolas Patry authored
      * [WIP] Adding new parameter to `generate`:  `max_time`.
      
      Generation limited by token count is sometimes a bit clunky, because we
      don't know how many tokens are good enough, or even how many tokens are
      in the payload (for pipeline users, for instance). This leads to
      hard-to-understand behavior.
      
      This PR proposes a new argument, `max_time`, a float giving the number
      of seconds `generate` is allowed to run for.
      Ideally, combinations such as `max_tokens=None`, `max_time=2` could be
      used to generate as many tokens as possible within the time budget.
      
      NB: Another possible approach consists of passing a callback to `generate`
        putting the caller in charge of the actual decision of when to stop
        generating tokens. It opens the door to 'which args should we pass'
        to this callback. It's hard to imagine other use-cases for this
        early stopping behavior than time (that are not already covered by
        parameters of generate)
      
      * Revamp with StoppingCriteria
      
      * Removing deprecated mentions.
      
      * Forgot arguments to stopping criteria.
      
      * Re-adding max_length; it's not just used as a stopping criterion.
      
      * Default value for `stopping_criteria`.
      
      * Address @patrickvonplaten comments.
      
      - More docstrings
      - Actual doc
      - Include in global namespace
      - Remove TF work.
      
      * Put back `max_length` (deprecation different PR).
      
      * Doc quality.
      
      * Fixing old behavior without `stopping_criteria` but with `max_length`.
      
      Making sure we don't break that in the future.
      
      * Adding more tests for possible inconsistencies between `max_length` and `stopping_criteria`.
      
      * Fixing the torch imports.
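The time-budget idea above, revamped around stopping criteria, can be sketched with a standalone toy loop. This mimics the shape of a stopping-criteria design but is not the transformers API; `MaxTimeCriterion` and `toy_generate` are illustrative names only:

```python
import time

# Sketch: a stopping criterion that ends generation once `max_time`
# seconds have elapsed, checked once per generated token in a toy loop.
# Illustrative only, not the transformers StoppingCriteria API.

class MaxTimeCriterion:
    def __init__(self, max_time: float):
        self.max_time = max_time
        self.start = time.monotonic()  # timing starts at construction

    def __call__(self, generated_tokens) -> bool:
        return time.monotonic() - self.start >= self.max_time


def toy_generate(next_token, criteria, max_length=1_000_000):
    tokens = []
    while len(tokens) < max_length:
        # Stop as soon as any criterion fires; max_length stays as a
        # separate hard cap, as the PR discussion above notes.
        if any(c(tokens) for c in criteria):
            break
        tokens.append(next_token())
    return tokens


tokens = toy_generate(lambda: 0, [MaxTimeCriterion(max_time=0.05)])
print(0 < len(tokens) <= 1_000_000)
```

A list of criteria composes naturally: a length cap and a time budget can be passed together, and whichever fires first stops generation.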
    • Adjust loss difference (#10669) · ea46e3fa
      Lysandre Debut authored