"tests/test_tokenization_distilbert.py" did not exist on "067395d5c56ef9026c442e691b6458ac196e3cf9"
- 13 May, 2020 6 commits
-
-
Sam Shleifer authored
[Marian Fixes] prevent predicting pad_token_id before softmax, support language codes, name multilingual models (#4290)
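A minimal sketch of the idea behind the first fix (the helper name is illustrative, not the commit's code): set the pad token's logit to -inf before softmax so it receives zero probability and can never be predicted.

```python
import torch

def mask_pad_logits(logits: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # exp(-inf) == 0, so softmax assigns the pad token zero probability
    # and neither sampling nor argmax can ever pick it.
    logits[:, pad_token_id] = float("-inf")
    return logits

# Example: 2 sequences, vocab of 5, pad_token_id = 0
probs = torch.softmax(mask_pad_logits(torch.randn(2, 5), pad_token_id=0), dim=-1)
assert torch.all(probs[:, 0] == 0)
```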
-
Patrick von Platen authored
* add first text for generation
* add generation pipeline to usage
* Created using Colaboratory
* correct docstring
* finish
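Minimal usage of the generation pipeline the new docs cover (default model and parameters are assumptions):

```python
from transformers import pipeline

generator = pipeline("text-generation")  # defaults to a GPT-2 checkpoint
print(generator("Hello, I'm a language model,", max_length=30))
```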
-
Elyes Manai authored
-
Julien Plu authored
* Add QA trainer example for TF
* Make data_dir optional
* Fix parameter logic
* Fix feature convert
* Update the READMEs to add the question-answering task
* Apply style
* Change 'sequence-classification' to 'text-classification' and prefix all the metric names with 'eval'
* Apply style
* Apply style
-
Denis authored
Fix for #3865. PreTrainedTokenizer mapped " do not" into " don't" when .decode(...) was called. Removed the " do not" --> " don't" mapping from clean_up_tokenization(...). (#4024)
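A small reproduction of the bug this removes (model choice is illustrative; decode() applies clean_up_tokenization() by default):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ids = tokenizer.encode("I do not like it", add_special_tokens=False)
print(tokenizer.decode(ids))
# before the fix: "i don't like it" -- after the fix: "i do not like it"
```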
-
Julien Chaumond authored
* Improvements to the wandb integration
* small reorg + no global necessary
* feat(trainer): log epoch and final metrics
* Simplify logging a bit
* Fixup
* Fix crash when just running eval

Co-authored-by: Chris Van Pelt <vanpelt@gmail.com>
Co-authored-by: Boris Dayma <boris.dayma@gmail.com>
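An illustrative stand-alone sketch of what "log epoch and final metrics" means (project name and values are made up):

```python
import wandb

wandb.init(project="transformers-demo")
# Logging the epoch counter alongside each metric keeps wandb charts
# aligned with training progress.
for epoch, loss in enumerate([0.9, 0.5, 0.3]):
    wandb.log({"epoch": epoch, "loss": loss})
wandb.log({"eval_loss": 0.28})  # final metrics after training
```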
-
- 12 May, 2020 12 commits
-
-
Funtowicz Morgan authored
* Allow BatchEncoding to be initialized empty. This is required by recent changes introduced in TF 2.2.
* Attempt to unpin TensorFlow to 2.2 with the previous commit.
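A sketch of what the change permits:

```python
from transformers import BatchEncoding

# TF 2.2's Keras input handling may construct input containers with no
# arguments, so BatchEncoding() must work without initial data.
empty = BatchEncoding()
print(len(empty))  # 0

filled = BatchEncoding({"input_ids": [[101, 2023, 102]]})
print(filled["input_ids"])
```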
-
Savaş Yıldırım authored
-
Savaş Yıldırım authored
-
Stefan Schweter authored
-
Viktor Alm authored
-
Julien Chaumond authored
-
Viktor Alm authored
* catch gpu len 1 set to gpu0
* Add MPC to trainer
* Add MPC for TF
* Fix TF automodel for MPC and add Albert
* Apply style
* Fix import
* Note to self: double check
* Make shape None, None for datasetgenerator output shapes
* Add from_pt bool which doesn't seem to work
* Original checkpoint dir
* Fix docstrings for automodel
* Update readme and apply style
* Colab should probably not be from users
* Colabs should probably not be from users
* Add colab
* Update README.md
* Update README.md
* Cleanup __init__
* Cleanup flake8 trailing comma
* Update src/transformers/training_args_tf.py
* Update src/transformers/modeling_tf_auto.py

Co-authored-by: Viktor Alm <viktoralm@pop-os.localdomain>
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
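A hedged sketch of the multiple-choice ("MPC") head this wires into the TF trainer; the checkpoint name is illustrative and its multiple-choice head would be randomly initialized here:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-uncased")

prompt = "The sky is"
choices = ["blue.", "a sandwich."]
enc = tokenizer([prompt, prompt], choices, return_tensors="tf", padding=True)
# Multiple-choice models expect inputs shaped (batch, num_choices, seq_len).
inputs = {k: tf.expand_dims(v, 0) for k, v in enc.items()}
logits = model(inputs)[0]  # shape (1, 2): one score per choice
```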
-
Levent Serinol authored
fixed missing torch module import in example usage code
-
Jangwon Park authored
-
Lysandre Debut authored
* pin TF to 2.1
* Pin flake8 as well
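A sketch of where such pins live in setup.py (exact specifiers and extras layout are assumptions):

```python
from setuptools import setup

setup(
    name="example",
    install_requires=["tensorflow==2.1.0"],     # pin TF to 2.1
    extras_require={"dev": ["flake8==3.7.9"]},  # pin flake8 as well
)
```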
-
Julien Chaumond authored
-
Julien Chaumond authored
-
- 11 May, 2020 17 commits
-
-
Lysandre Debut authored
-
Bram Vanroy authored
* simplify cache vars and allow for TRANSFORMERS_CACHE env

  As it currently stands, "TRANSFORMERS_CACHE" is not an accepted variable. It seems that these variables were not updated when moving from pytorch_transformers to transformers. In addition, the fallback procedure could be improved and simplified; pathlib seems redundant here.

* Update file_utils.py
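A sketch of the resulting resolution order: the new variable first, then the older package names as fallbacks, then a default path (the default shown is an assumption):

```python
import os

default_cache = os.path.join(os.path.expanduser("~"), ".cache", "torch", "transformers")
cache_dir = os.getenv(
    "TRANSFORMERS_CACHE",
    os.getenv("PYTORCH_TRANSFORMERS_CACHE",
              os.getenv("PYTORCH_PRETRAINED_BERT_CACHE", default_cache)),
)
print(cache_dir)
```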
-
Lysandre Debut authored
-
Tianlei Wu authored
* allow gpt2 to be exported to a valid ONNX model
* cast size from int to float explicitly
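A sketch of the export this enables (opset, axis names, and output naming are assumptions):

```python
import torch
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")
model.eval()
input_ids = torch.randint(0, 50257, (1, 8), dtype=torch.long)
torch.onnx.export(
    model,
    (input_ids,),
    "gpt2.onnx",
    input_names=["input_ids"],
    output_names=["last_hidden_state"],
    dynamic_axes={"input_ids": {0: "batch", 1: "sequence"}},
    opset_version=11,
)
```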
-
Guo, Quan authored
"Migrating from pytorch-transformers to transformers" is missing in the main document. It is available in the main `readme` thought. Just move it to the document.
-
Lysandre Debut authored
-
fgaim authored
* Add ALBERT to convert command of transformers-cli
* Document ALBERT tf to pytorch model conversion
-
Stefan Schweter authored
* docs: fix link to token classification (NER) example
* examples: fix links to NER scripts
-
Funtowicz Morgan authored
-
Sam Shleifer authored
-
Levent Serinol authored
* Create README.md
* Update model_cards/lserinol/bert-turkish-question-answering/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Julien Plu authored
* Fix the issue to properly run the accumulator with TF 2.2
* Apply style
* Fix training_args_tf for TF 2.2
* Fix the TF training args when only one GPU is available
* Remove the fixed version of TF in setup.py
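A sketch of the single-GPU case the fix addresses (the real logic lives in TFTrainingArguments; this is illustrative):

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if len(gpus) == 0:
    strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
elif len(gpus) == 1:
    # One GPU: a MirroredStrategy is unnecessary; pin to that device.
    strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
    strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    pass  # build model and optimizer under the chosen strategy
```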
-
Savaş Yıldırım authored
-
Julien Plu authored
-
theblackcat102 authored
-
Patrick von Platen authored
* adapt convert script
* update convert script
* finish
* fix marian pretrained docs
-
Patrick von Platen authored
-
- 10 May, 2020 3 commits
-
-
flozi00 authored
-
Sam Shleifer authored
- MarianSentencepieceTokenizer -> MarianTokenizer
- Start using unk token.
- add docs page
- add better generation params to MarianConfig
- more conversion utilities
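A sketch of the renamed tokenizer in use (the checkpoint follows the Helsinki-NLP/opus-mt-{src}-{tgt} naming convention; API details here are assumptions):

```python
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer(["Hello, world!"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```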
-
Girishkumar authored
-
- 08 May, 2020 2 commits
-
-
Julien Chaumond authored
* [TPU] Doc, fix xla_spawn.py, only preprocess dataset once
* Update examples/README.md
* [xla_spawn] Add `_mp_fn` to other Trainer scripts
* [TPU] Fix: eval dataloader was None
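The `_mp_fn` hook is small; a sketch of what each Trainer script gains:

```python
def main():
    # The script's regular single-process entry point.
    print("training...")

def _mp_fn(index):
    # xla_spawn.py launches one process per TPU core via torch_xla's
    # xmp.spawn() and calls _mp_fn with that core's rank.
    main()

if __name__ == "__main__":
    main()
```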
-
Julien Chaumond authored
-