- 05 Apr, 2021 1 commit
-
Lysandre Debut authored
-
- 01 Apr, 2021 2 commits
-
cronoik authored
* closes #10258
* typo
* reworked deberta test
* implemented the comments from BigBird01 regarding sequence pair encoding of deberta
* Update style
* VOCAB_FILES_NAMES is now a oneliner as suggested by @sgugger
* added #fmt: on as requested by @sgugger
* Style

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
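For context, a minimal sketch of the sequence pair encoding this commit reworks; the checkpoint name and input strings are illustrative, not taken from the commit:

```python
from transformers import DebertaTokenizer

tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")

# A sequence pair is encoded as [CLS] seq_a [SEP] seq_b [SEP]
encoded = tokenizer("A first sequence.", "A second sequence.")
print(tokenizer.decode(encoded["input_ids"]))
```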
-
NielsRogge authored
* Squash all commits into one
* Update ViTFeatureExtractor to use image_utils instead of torchvision
* Remove torchvision and add Pillow
* Small docs improvement
* Address most comments by @sgugger
* Fix tests
* Clean up conversion script
* Pooler first draft
* Fix quality
* Improve conversion script
* Make style and quality
* Make fix-copies
* Minor docs improvements
* Should use fix-copies instead of manual handling
* Revert "Should use fix-copies instead of manual handling" (reverts commit fd4e591bce4496d41406425c82606a8fdaf8a50b)
* Place ViT in alphabetical order

Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
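A minimal usage sketch of the Pillow-based preprocessing this commit switches to; the checkpoint name is the public ViT checkpoint, and the blank test image is a stand-in for a real photo:

```python
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForImageClassification

feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

image = Image.new("RGB", (256, 256), color="white")  # stand-in for a real image
inputs = feature_extractor(images=image, return_tensors="pt")  # resize + normalize via Pillow
outputs = model(**inputs)
print(outputs.logits.argmax(-1))  # predicted ImageNet class id
```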
-
- 31 Mar, 2021 4 commits
-
Sylvain Gugger authored
* Replace is_sagemaker_distributed_available
* Merge SageMakerTrainer into Trainer
* Test with shorter condition
* Put back deleted line
* Deprecate SageMakerTrainer and SageMakerTrainingArguments
* Apply suggestions from code review

Co-authored-by: Philipp Schmid <32632186+philschmid@users.noreply.github.com>
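A minimal migration sketch, assuming a script that previously used the SageMaker-specific classes; the model name is illustrative and train_dataset is a placeholder for any PyTorch dataset:

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# Previously: from transformers.sagemaker import SageMakerTrainer, SageMakerTrainingArguments
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
args = TrainingArguments(output_dir="/opt/ml/model")  # SageMaker's model directory
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)  # train_dataset: placeholder
trainer.train()  # SageMaker distributed training is now detected by Trainer itself
```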
-
Sylvain Gugger authored
* First third
* Styling and fix mistake
* Quality
* All the rest
* Treat %s and %d
* typo
* Missing )
* Apply suggestions from code review

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
-
Philipp Schmid authored
* wrong makefile command
* ddp test fix
-
Patrick von Platen authored
* add first code structures
* add all bert models
* add to init and docs
* correct docs
* make style
-
- 30 Mar, 2021 6 commits
-
Suraj Patil authored
* fix checkpoint names
* auto model
* fix doc
-
Patrick von Platen authored
-
Suraj Patil authored
* lets begin
* boom boom
* fix out proj in attn
* fix attention
* fix local attention
* add tokenizer
* fix imports
* autotokenizer
* fix checkpoint name
* cleanup
* more clean-up
* more cleanup
* output attentions
* fix attn mask creation
* fix imports
* config doc
* add tests
* add slow tests
* quality
* add conversion script
* copyright
* typo
* another bites the dust
* fix attention tests
* doc
* add embed init in convert function
* fix copies
* remove tokenizer
* enable caching
* address review comments
* improve config and create attn layer list internally
* more consistent naming
* init hf config from mesh-tf config json file
* remove neo tokenizer from doc
* handle attention_mask in local attn layer
* attn_layers => attention_layers
* add tokenizer_class in config
* fix docstring
* raise if len of attention_layers is not same as num_layers
* remove tokenizer_class from config
* more consistent naming
* fix doc
* fix checkpoint names
* fp16 compat
* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
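A minimal generation sketch for the new GPT Neo port; the EleutherAI checkpoint name is the one the fixed checkpoint names point to, and the prompt is illustrative:

```python
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

inputs = tokenizer("GPT Neo alternates global and local attention,", return_tensors="pt")
# caching is enabled, so generation reuses past key/values
output_ids = model.generate(**inputs, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```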
-
Patrick von Platen authored
* save intermediate
* finish first version
* delete some more
* improve import
* fix roberta
* Update src/transformers/modeling_flax_pytorch_utils.py
* Update src/transformers/modeling_flax_pytorch_utils.py
* small corrections
* apply all comments
* fix deterministic
* make fix-copies

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
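A minimal sketch of what the new conversion utilities enable: loading a PyTorch checkpoint directly into a Flax model. The RoBERTa checkpoint is illustrative, and PyTorch must be installed for the conversion:

```python
from transformers import FlaxRobertaModel

# from_pt=True routes the PyTorch weights through modeling_flax_pytorch_utils
model = FlaxRobertaModel.from_pretrained("roberta-base", from_pt=True)
```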
-
Philipp Schmid authored
* init
* first working test
* added todo for setup.py
* working test for single node, multi node ddp and smd
* added tensorflow single node test
* added directory for pytorch and tensorflow due to different requirements.txt
* added comment for run_glue until it is available
* added output_dir to it
* smaller dataset to make test running faster
* adjust HP and script
* adjusted parameter for tensorflow
* refactored test scripts
* adjusted make file
* updated dlc container
* commented in all tests
* added both ecr images
* added new master branches
* debug
* added new datasets version
* strange rebase bug
* removed changes
* changed min version for tests to work
* updated DLC
* added model parallel test
* removed test files
* tested with new dlc
* added correct sagemaker sdk version
* adjust DLCs for official one
* reworked tests
* quality
* removed default profile, added documentation to it
* added step in release for sagemaker tests
* reverted version for example script, removed duplicated script and added install from master to requirements.txt
* removed mistaken .DS_Stores from mac
* fixed tests
* added Sylvain's feedback
* make style
* added Lysandre's feedback
-
Vasudev Gupta authored
* init bigbird
* model.__init__ working, conversion script ready, config updated
* add conversion script
* BigBirdEmbeddings working :)
* slightly update conversion script
* BigBirdAttention working :) ; some bug in layer.output.dense
* add debugger-notebook
* forward() working for BigBirdModel :) ; replaced gelu with gelu_fast
* tf code adapted to torch till rand_attn in bigbird_block_sparse_attention ; till now everything working :)
* BigBirdModel working in block-sparse attention mode :)
* add BigBirdForPreTraining
* small fix
* add tokenizer for BigBirdModel
* fix config & hence modeling
* fix base prefix
* init testing
* init tokenizer test
* pos_embed must be absolute, attn_type=original_full when add_cross_attn=True, nsp loss is optional in BigBirdForPreTraining, add assert statements
* remove position_embedding_type arg
* complete normal tests
* add comments to block sparse attention
* add attn_probs for sliding & global tokens
* create fn for block sparse attn mask creation
* add special tests
* restore pos embed arg
* minor fix
* attn probs update
* make big bird fully gpu friendly
* fix tests
* remove pruning
* correct tokenizer & minor fixes
* update conversion script, remove norm_type
* tokenizer-inference test add
* remove extra comments
* add docs
* save intermediate
* finish trivia_qa conversion
* small update to forward
* correct qa and layer
* better error message
* BigBird QA ready
* fix rebased
* add trivia-qa debugger notebook
* qa setup
* fixed till embeddings
* some issue in q/k/v_layer
* fix bug in conversion-script
* fixed till self-attn
* qa fixed except layer norm
* add qa end2end test
* fix gradient ckpting ; other qa test
* speed-up big bird a bit
* hub_id=google
* clean up
* make quality
* speed up einsum with bmm
* finish perf improvements for big bird
* remove wav2vec2 tok
* fix tokenizer
* include docs
* correct docs
* add helper to auto pad block size
* make style
* remove fast tokenizer for now
* fix some
* add pad test
* finish
* fix some bugs
* fix another bug
* fix buffer tokens
* fix comment and merge from master
* add comments
* make style
* commit some suggestions
* Fix typos
* fix some more suggestions
* add another patch
* fix copies
* another path
* update
* update nit suggestions
* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
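A minimal usage sketch; google/bigbird-roberta-base matches the hub_id mentioned above, the input is illustrative, and note that very short inputs may be handled with full rather than block-sparse attention:

```python
from transformers import BigBirdModel, BigBirdTokenizer

tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdModel.from_pretrained(
    "google/bigbird-roberta-base", attention_type="block_sparse"
)

inputs = tokenizer("BigBird scales attention to long documents.", return_tensors="pt")
outputs = model(**inputs)  # inputs are auto-padded to a multiple of the block size
print(outputs.last_hidden_state.shape)
```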
-
- 26 Mar, 2021 1 commit
-
Sylvain Gugger authored
* Add ImageFeatureExtractionMixin
* Add dummy vision objects
* Add require_vision
* Add tests
* Fix test
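A minimal sketch of the kind of helpers the mixin provides (resize, to_numpy_array, normalize); the sizes and normalization values are illustrative:

```python
from PIL import Image
from transformers.image_utils import ImageFeatureExtractionMixin

mixin = ImageFeatureExtractionMixin()
image = Image.new("RGB", (640, 480), color="white")  # stand-in for a real image

resized = mixin.resize(image, size=(224, 224))  # PIL-based resize
array = mixin.to_numpy_array(resized)           # float array, channels first
normalized = mixin.normalize(array, mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
print(normalized.shape)  # (3, 224, 224)
```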
-
- 25 Mar, 2021 1 commit
-
Amir Tahmasbi authored
* Added embeddings layer
* Added layoutlm layers, main model, maskedlm and token classification classes
* Added model classes to tf auto models
* Added model to PT to TF conversion script
* Added model to doc README
* Added tests
* Removed unused imports
* Added layoutlm model, test, and doc for sequence classification, and fix imports in __init__.py
* Made tests pass!
* Fixed typos in imports and docs
* Fixed a typo in embeddings layer
* Removed imports
* Fixed formatting issues, imports, tests
* Fixed small formatting issues
* Removed duplicate import from main __init__.py
* Changed default arg to true for adding pooling layer to tf layoutlm
* Style
* Added copied from to classes copied from bert
* Fixed doc strings examples to work with layoutlm inputs
* Removed PyTorch reference in doc strings example
* Added integration tests
* Cleaned up initialization file
* Updated model checkpoint identifiers
* Fixed imports

Co-authored-by: Amir Tahmasbi <amir@ehsai.ca>
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
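A minimal inference sketch for the TensorFlow port; the Microsoft checkpoint name follows the updated identifiers, and the word bounding boxes are dummy values:

```python
import tensorflow as tf
from transformers import LayoutLMTokenizer, TFLayoutLMModel

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = TFLayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")

encoding = tokenizer("Hello world", return_tensors="tf")
# one 0-1000 normalized box per token, including [CLS] and [SEP]
bbox = tf.constant([[[0, 0, 0, 0],
                     [637, 773, 693, 782],
                     [698, 773, 733, 782],
                     [1000, 1000, 1000, 1000]]])
outputs = model(input_ids=encoding["input_ids"], bbox=bbox)
print(outputs.last_hidden_state.shape)
```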
-
- 23 Mar, 2021 1 commit
-
Sylvain Gugger authored
-
- 22 Mar, 2021 1 commit
-
Patrick von Platen authored
* push
* finish
* finish
* make fix copies
* change name
-
- 19 Mar, 2021 1 commit
-
Théo Matussière authored
* fix backend tokenizer args override: key mismatch
* no touching the docs
* fix mpnet
* add mpnet to test
* fix test

Co-authored-by: theo <theo@matussie.re>
-
- 18 Mar, 2021 3 commits
-
Sylvain Gugger authored
* Fix distributed evaluation
* Use logger
-
Vimarsh Chaturvedi authored
* Added check to ensure model name passed to from_pretrained and model are the same
* Added test to check from_pretrained throws assert error when passed an incompatible model name
* Modified assert in from_pretrained with f-strings; modified test to ensure desired assert message is being generated
* Added check to ensure config and model has model_type
* Fix FlauBERT heads

Co-authored-by: vimarsh chaturvedi <vimarsh chaturvedi>
Co-authored-by: Stas Bekman <stas@stason.org>
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
-
Patrick von Platen authored
* Create modeling_flax_electra with code copied from modeling_flax_bert
* Add ElectraForMaskedLM and ElectraForPretraining
* Add modeling test for Flax electra and fix naming and arg in Flax Electra model
* Add documentation
* Fix code style
* Fix code quality
* Adjust tol in assert_almost_equal due to very small difference between model output, ranging 0.0010 - 0.0016
* Remove redundant ElectraPooler
* save intermediate
* adapt
* correct bert flax design
* adapt roberta as well
* finish roberta flax
* finish
* apply suggestions

Co-authored-by: Chris Nguyen <anhtu2687@gmail.com>
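A minimal sketch of the new Flax classes; the google/electra-small-generator checkpoint and the masked sentence are illustrative:

```python
from transformers import ElectraTokenizerFast, FlaxElectraForMaskedLM

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-generator")
# add from_pt=True if only PyTorch weights are available for the checkpoint
model = FlaxElectraForMaskedLM.from_pretrained("google/electra-small-generator")

inputs = tokenizer("The quick brown [MASK] jumps over the lazy dog.", return_tensors="np")
outputs = model(**inputs)  # per-token vocabulary scores from the masked-LM head
```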
-
- 17 Mar, 2021 6 commits
-
Mansi Mane authored
* Added debug prints
* Added config
* Added prints
* Added extra samples to SequentialDistributedSampler
* Updated SequentialDistributedSampler call
* Removed extra prints
* Making predictions and labels multiple of batch size
* Updated number of microbatches
* Made start_remainder similar to DistributedSamplerWithLoop
* Minor spacing update
* Test and styling
* Rename test

Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
-
Sylvain Gugger authored
* Apply black before checking copies
* Fix for class methods
* Deal with lonely brackets
* Remove debug and add forward changes
* Separate copies and fix test
* Add black as a test dependency
-
Stas Bekman authored
-
Stas Bekman authored
* deepspeed checkpoint loading code plus tests
* style
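A minimal resume sketch, assuming a DeepSpeed run that already saved a checkpoint; ds_config.json, the model name, the dataset placeholder, and the checkpoint path are all illustrative:

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
args = TrainingArguments(
    output_dir="output",
    deepspeed="ds_config.json",  # illustrative DeepSpeed config file
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)  # train_dataset: placeholder
# resuming now also restores the DeepSpeed engine state saved under the checkpoint folder
trainer.train(resume_from_checkpoint="output/checkpoint-500")
```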
-
Patrick von Platen authored
-
Patrick von Platen authored
-
- 16 Mar, 2021 3 commits
-
Lysandre Debut authored
* Patches the full import failure and adds a test
* Add comment
-
Sylvain Gugger authored
* Add DistributedSamplerWithLoop
* Fix typo
* Test and small fix
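A minimal sketch of what the new sampler does: each process's shard is padded to a round multiple of the batch size by looping back over its own indices. The dataset and process values are illustrative:

```python
import torch
from torch.utils.data import TensorDataset
from transformers.trainer_pt_utils import DistributedSamplerWithLoop

dataset = TensorDataset(torch.arange(10))
# rank 0 of 2 processes gets 5 shard indices, padded to 8 (= 2 * batch_size)
sampler = DistributedSamplerWithLoop(dataset, batch_size=4, num_replicas=2, rank=0)
print(len(list(sampler)))  # 8
```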
-
Patrick von Platen authored
* make flax tests pytorch independent
* fix typo
* finish
* improve circle ci
* fix return tensors
* correct flax test
* re-add sentencepiece
* last tokenizer fixes
* finish maybe now
-
- 15 Mar, 2021 6 commits
-
Joe Davison authored
-
Lysandre Debut authored
* Tests run on Docker
* Comments from code review
* Reply to itself
* Dependencies

Co-authored-by: Morgan <funtowiczmo@gmail.com>
-
Patrick von Platen authored
-
Adam Pocock authored
* Adding required flags to non-default arguments. Signed-off-by: Adam Pocock <adam.pocock@oracle.com>
* make style fix.
* Update src/transformers/hf_argparser.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
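A minimal sketch of the behavior this changes: a dataclass field without a default now maps to a required command-line flag. The field names are illustrative:

```python
from dataclasses import dataclass, field
from transformers import HfArgumentParser

@dataclass
class Args:
    model_name: str  # no default, so --model_name becomes a required argument
    learning_rate: float = field(default=5e-5)

parser = HfArgumentParser(Args)
(args,) = parser.parse_args_into_dataclasses(["--model_name", "bert-base-uncased"])
print(args.model_name, args.learning_rate)
```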
-
Igor Shalyminov authored
* GPT2DoubleHeadsModel made parallelizable
* GPT2DoubleHeadsModel added as parallelizable onto the GPT2 test suite
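A minimal sketch of the model parallelism this enables; the two-GPU split of gpt2's 12 blocks is illustrative and needs two visible GPUs to actually run:

```python
from transformers import GPT2DoubleHeadsModel

model = GPT2DoubleHeadsModel.from_pretrained("gpt2")
device_map = {
    0: list(range(0, 6)),   # first six transformer blocks on GPU 0
    1: list(range(6, 12)),  # remaining six blocks on GPU 1
}
model.parallelize(device_map)
# ... forward/backward passes as usual ...
model.deparallelize()  # move the model back to CPU
```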
-
Patrick von Platen authored
-
- 12 Mar, 2021 3 commits
-
Lysandre Debut authored
* TF model exists for Blenderbot 400M
* Marian
* RAG
-
Nicolas Patry authored
* [WIP] Adding new parameter to `generate`: `max_time`.

  Generation by number of tokens is sometimes a bit clunky because we don't know how many tokens are good enough or even how many tokens are in the payload (for pipelines users, for instance). This leads to hard-to-understand behavior. This PR proposes a new argument, `max_time`, a float of seconds for the time `generate` is allowed to run. Ideally, combinations like `max_tokens=None`, `max_time=2` could be used to generate as many tokens as possible within the time budget.

  NB: another possible approach consists of passing a callback to `generate`, putting the caller in charge of the actual decision of when to stop generating tokens. It opens the door to "which args should we pass" to this callback. It's hard to imagine other use cases for this early-stopping behavior than time (that are not already covered by parameters of generate).

* Revamp with StoppingCriteria
* Removing deprecated mentions.
* Forgot arguments to stopping criteria.
* Re-adding max_length; it's not just used as a stopping criterion.
* Default value for `stopping_criteria`.
* Address @patrickvonplaten comments: more docstrings, actual doc, include in global namespace, remove TF work.
* Put back `max_length` (deprecation in a different PR).
* Doc quality.
* Fixing old behavior without `stopping_criteria` but with `max_length`, making sure we don't break that in the future.
* Adding more tests for possible inconsistencies between `max_length` and `stopping_criteria`.
* Fixing the torch imports.
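A minimal sketch of the new argument; gpt2 and the prompt are illustrative:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Generation under a time budget:", return_tensors="pt")
# stop after roughly two seconds, or at max_length, whichever comes first
output_ids = model.generate(**inputs, max_length=200, max_time=2.0)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```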
-
Lysandre Debut authored
-
- 11 Mar, 2021 1 commit
-
Sylvain Gugger authored
* PoC
* Fix slow tests for the PT1.8 Embedding problem
-