- 05 Jan, 2021 7 commits
-
Patrick von Platen authored
* create model * add integration * save current state * make integration tests pass * add one more test * add explanation to tests * remove from bart * add padding * remove unnecessary test * make all tests pass * re-add cookie cutter tests * finish PyTorch * fix attention test * Update tests/test_modeling_common.py * revert change * remove unused file * add string to doc * save intermediate * make tf integration tests pass * finish tf * fix doc * fix docs again * add led to doctree * add to auto tokenizer * added tips for led * make style * apply jplus statements * correct tf longformer * apply lysandres suggestions * apply sylvains suggestions * Apply suggestions from code review
-
Julien Plu authored
* Fix Funnel * Apply Patrick's comment * Remove comment * Fix dummy value * Apply style
-
Stas Bekman authored
* --model_parallel hasn't been implemented for most models * make the help clear as well * implement is_parallelizable; use it * oops * remove property
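A minimal sketch of how the new `is_parallelizable` flag can gate `--model_parallel`; the script-side check and the checkpoint name are illustrative, not the PR's exact code.
```
# Hypothetical guard in a training script: refuse --model_parallel for
# models that don't implement model parallelism.
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")  # illustrative checkpoint
if not getattr(model, "is_parallelizable", False):
    raise ValueError("--model_parallel is not supported by this model")
```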
-
Julien Plu authored
-
Stas Bekman authored
This PR proposes to: * auto-flush `transformers` logging

When using logging to trace signals from different parts of the code, which may be mixed with print debugging, this helps keep all the logging events synchronized. I don't think this change will introduce any performance impact.

If it helps someone, here is the code I used to sync `transformers` logging with various other debug prints. I was porting bart to MP and needed to trace that the device switching happens correctly, so I added a bunch of logger.info calls inside `modeling_bart.py` and also had some other helpers `print` debug messages which weren't logger based:
```
# auto-flush std streams
from sys import stdout, stderr

def stdout_write_flush(args, w=stdout.write):
    w(args); stdout.flush()

def stderr_write_flush(args, w=stderr.write):
    w(args); stderr.flush()

stdout.write = stdout_write_flush
stderr.write = stderr_write_flush

from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig
import logging
import transformers.utils.logging
import transformers.models.bart.modeling_bart

# I wanted a shorter, simpler format
handlers = transformers.utils.logging._get_library_root_logger().handlers
for handler in handlers:
    formatter = logging.Formatter("[%(funcName)s] %(message)s")
    handler.setFormatter(formatter)

transformers.models.bart.modeling_bart.logger.setLevel(transformers.logging.INFO)
```
@LysandreJik, @sgugger, @patrickvonplaten
-
Julien Plu authored
* Fix longformer * Apply style * Remove serving content * Forgot a condition * Apply style * Address Patrick's comments * Fix dtype
-
Boris Dayma authored
* feat(wandb): log artifacts * fix: typo * feat(wandb): ensure name is allowed * feat(wandb): log artifact * feat(wandb): saving logic * style: improve formatting * fix: unrelated typo * feat: use a fake trainer * fix: simplify * feat(wandb): log model files as artifact * style: fix style * docs(wandb): correct description * feat: unpack model + allow env truthy values * feat: TrainerCallback can access tokenizer * style: fix style * feat(wandb): log more interesting metadata * feat: unpack tokenizer * feat(wandb): metadata with load_best_model_at_end * feat(wandb): more robust metadata * style(wandb): fix formatting
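A minimal usage sketch, assuming model-artifact logging is opted into via the `WANDB_LOG_MODEL` environment variable hinted at by the truthy-env commit above; the exact accepted values are an assumption.
```
# Hedged sketch: enable logging the trained model as a W&B artifact.
# WANDB_LOG_MODEL's accepted values are assumed, not a confirmed spec.
import os

os.environ["WANDB_LOG_MODEL"] = "true"  # any truthy value should work
# ...then run training as usual; the wandb callback reads this flag.
```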
-
- 04 Jan, 2021 6 commits
-
Stas Bekman authored
-
Julien Plu authored
* Fix DPR * Keep usual models * Apply style * Address Sylvain's comments
-
Stas Bekman authored
This PR: * fixes trainer to have the logger agree with the actual default `output_dir`, by setting it in one place and passing it as an argument to both places. @sgugger
-
Julien Plu authored
-
Julien Plu authored
-
Charles authored
* add get_cached_models function * add List type to import * fix code quality * Update src/transformers/file_utils.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/file_utils.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/file_utils.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/file_utils.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/file_utils.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Fix style Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
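A minimal usage sketch of the new helper; that it yields per-file tuples is an assumption based on the commit list, not a confirmed signature.
```
# Hedged sketch: list models already present in the local cache.
from transformers.file_utils import get_cached_models

for entry in get_cached_models():  # assumed: (url, etag, size) tuples
    print(entry)
```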
-
- 02 Jan, 2021 3 commits
-
Chris Kennedy authored
-
Patrick von Platen authored
* push * make style
-
Derrick Blakely authored
-
- 30 Dec, 2020 1 commit
-
Stas Bekman authored
-
- 29 Dec, 2020 1 commit
-
Stas Bekman authored
```
python -c "from apex.normalization import FusedProphetNetLayerNorm"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: cannot import name 'FusedProphetNetLayerNorm' from 'apex.normalization' (/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/apex/normalization/__init__.py)
```
It looks like this code has never been tested, so it silently fails inside try/except. Discovered this by accident in https://github.com/huggingface/transformers/issues/9338#issuecomment-752217708
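For context, a minimal sketch (assumed shape of the removed code) of the guarded-import pattern that hid this: the `except ImportError` branch is always taken, so the bad name never surfaces.
```
# Assumed shape of the guarded import that silently failed: the fused
# apex kernel is requested under a name that does not exist, so the
# fallback branch always runs and the ImportError is never reported.
try:
    from apex.normalization import FusedProphetNetLayerNorm as ProphetNetLayerNorm
except ImportError:
    from torch.nn import LayerNorm as ProphetNetLayerNorm  # always taken
```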
-
- 28 Dec, 2020 2 commits
-
Julien Plu authored
-
Julien Plu authored
* Fix T5 * Fix test * Fix test
-
- 27 Dec, 2020 1 commit
-
Patrick von Platen authored
-
- 25 Dec, 2020 1 commit
-
Patrick von Platen authored
* correct gpt2 * fix gpt2 * fix use_cache ordering * correct past tolerance * fix for all cases * style
-
- 24 Dec, 2020 6 commits
-
Bram Vanroy authored
Missing "s" typo
-
Daniele Sartiano authored
* Update modeling_encoder_decoder.py Fixed typo. * typo Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
Ratthachat (Jung) authored
* Create modeling_tf_dpr.py * Add TFDPR * Add back TFPegasus, TFMarian, TFMBart, TFBlenderBot (the last commit accidentally deleted these 4 lines, so I recovered them) * Add TFDPR * Add TFDPR * clean up some comments, add TF input-style doc string * Add TFDPR * Make return_dict=False the default * Fix return_dict bug (in .from_pretrained) * Add get_input_embeddings() * Create test_modeling_tf_dpr.py The current version already passes all 27 tests! Please see the test run at: https://colab.research.google.com/drive/1czS_m9zy5k-iSJbzA_DP1k1xAAC_sdkf?usp=sharing * fix quality * delete init weights * run fix copies * fix repo consistency * del config_class, load_tf_weights They should be 'pytorch only' * add config_class back after removing it; the test failed, so only "use_tf_weights = None" is removed, on Lysandre's suggestion * newline after .. note:: * import tf, np (necessary for ModelIntegrationTest) * slow_test from_pretrained with from_pt=True At the moment we don't have TF weights (since we don't have an official TF model). Previously I did not run the slow test, so I missed this bug * Add simple TFDPRModelIntegrationTest Note that this only tests that TF and PyTorch give approximately the same output; I could not test against the official DPR repo's output yet * upload correct tf model * remove position_ids as missing keys * fix RagSeq generate with context_input_ids * apply style * delete unused lines * Add test_rag_sequence_generate_batch_from_context_input_ids * Readability improved * styling * Stylize * typos * add check_model_generate_from_context_input_ids * make style * Apply suggestions from code review * make style2 Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: patrickvonplaten <patrick@huggingface.co>
-
Suraj Patil authored
-
Jungwhan authored
-
Jethro Kuan authored
Fixes #9244 Co-authored-by: Jethro Kuan <jethro.kuan@bytedance.com>
-
- 23 Dec, 2020 3 commits
-
Suraj Patil authored
* add past_key_values * add use_cache option * make mask before cutting ids * adjust position_ids according to past_key_values * flatten past_key_values * fix positional embeds * fix _reorder_cache * set use_cache to false when not decoder, fix attention mask init * add test for caching * add past_key_values for Roberta * fix position embeds * add caching test for roberta * add doc * make style * doc, fix attention mask, test * small fixes * address patrick's comments * input_ids shouldn't start with pad token * use_cache only when decoder * make consistent with bert * make copies consistent * add use_cache to encoder * add past_key_values to tapas attention * apply suggestions from code review * make copies consistent * add attn mask in tests * remove copied from longformer * apply suggestions from code review * fix bart test * nit * simplify model outputs * fix doc * fix output ordering
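A minimal sketch of the `past_key_values` caching flow this PR brings to BERT-like decoders; GPT-2 is used only as a familiar model that already exposed the API, so the model choice is illustrative.
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

out = model(**tokenizer("Hello", return_tensors="pt"), use_cache=True)
past = out.past_key_values  # per-layer cached key/value states

# Subsequent steps feed only the new token plus the cache instead of
# re-encoding the whole prefix.
next_token = out.logits[:, -1:].argmax(dim=-1)
out = model(input_ids=next_token, past_key_values=past, use_cache=True)
```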
-
Xu Song authored
`TypeError: forward() got an unexpected keyword argument 'token_type_ids'`
-
Xu Song authored
-
- 22 Dec, 2020 5 commits
-
Patrick von Platen authored
* adapt cookie cutter * fix copy-paste statement * delete copy statements for now * remove unused import from template * make doc rst * correct config docstring * correct training * correct inputs processing tf enc dec * make style * adapt templates * clean tabs * correct tensor -> Tensor naming * correct indent * correct templates * fix the test * break lines to avoid > 119 * Apply suggestions from code review
-
Julien Chaumond authored
-
Julien Plu authored
* Fix TF BART for saved model creation * Apply style * Update src/transformers/models/bart/modeling_tf_bart.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/bart/modeling_tf_bart.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Rework the fix * Fix condition * Apply style * Fix condition * Fix shape_list * Apply Patrick's solution * Apply Patrick's solution * Rebase * make tests pass Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
-
Sylvain Gugger authored
* Add label smoothing in Trainer * Add options for scheduler and Adafactor in Trainer * Put Seq2SeqTrainer in the main lib * Apply suggestions from code review Co-authored-by: Stas Bekman <stas00@users.noreply.github.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Address review comments and adapt scripts * Documentation * Move test not using script to tests folder Co-authored-by: Stas Bekman <stas00@users.noreply.github.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
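A minimal sketch of turning the new options on; `label_smoothing_factor` and `adafactor` as argument names are assumptions drawn from the commit list.
```
# Hedged sketch: new Trainer knobs from this PR (field names assumed).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    label_smoothing_factor=0.1,  # label smoothing in Trainer
    adafactor=True,              # use Adafactor instead of AdamW
)
```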
-
Patrick von Platen authored
* add tests * make style and fix bart bug * fix bart past key value edge case * correct tf bart test * fix gpt2 tf * fix t5 test
-
- 21 Dec, 2020 4 commits
-
Patrick von Platen authored
* add converter * delete unnecessary comments
-
Suraj Patil authored
* add base model classes to bart subclassed models * add doc
-
TobiasNorlund authored
-
Julien Plu authored
* Improve BERT-like models' attention layers * Apply style * Put back error raising instead of assert * Update template * Fix copies * Apply ValueError raising in MPNet * Restore the copy check for the Intermediate layer in Longformer * Update longformer
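A minimal sketch (assumed shape) of the assert-to-exception change mentioned above ("Put back error raising instead of assert"):
```
# Assumed shape of the check: an explicit ValueError replaces a bare assert
# so the failure survives python -O and carries a useful message.
def check_heads(hidden_size: int, num_attention_heads: int) -> None:
    if hidden_size % num_attention_heads != 0:
        raise ValueError(
            f"hidden_size ({hidden_size}) is not a multiple of "
            f"num_attention_heads ({num_attention_heads})"
        )
```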
-