- 28 Apr, 2020 8 commits
Louis MARTIN authored

Bogdan Kostić authored

jazzcook15 authored
* Update sqrt computation so it can survive a torch.jit.trace
* Update modeling_gpt2.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
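The tracing pitfall behind this commit is easy to illustrate. The sketch below is not the actual modeling_gpt2.py change, only a hedged example of the general technique: computing the attention scale with tensor ops so that torch.jit.trace records it in the graph rather than baking in a Python constant.

```python
import torch

def scaled_scores(q, k):
    """Attention scores scaled by sqrt(head_dim), kept trace-friendly."""
    # A tensor-valued sqrt stays inside the traced graph, whereas a Python
    # math.sqrt would be evaluated once and frozen as a constant.
    scale = torch.sqrt(torch.tensor(k.size(-1), dtype=q.dtype))
    return torch.matmul(q, k.transpose(-1, -2)) / scale

# Tracing succeeds because every operation above is a torch op.
traced = torch.jit.trace(
    scaled_scores, (torch.randn(2, 4, 8), torch.randn(2, 4, 8))
)
```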
Patrick von Platen authored

Patrick von Platen authored
* change encoder decoder style to bart & t5 style
* make encoder decoder generation dummy work for bert
* make style
* clean init config in encoder decoder
* add tests for encoder decoder models
* refactor and add last tests
* fix attn masks for bert encoder decoder
* make style
* refactor prepare inputs for Bert
* refactor
* finish encoder decoder
* correct typo
* add docstring to config
* finish
* add tests
* better naming
* make style
* fix flake8
* clean docstring
* make style
* rename
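The Bart/T5-style wiring this commit describes, a decoder cross-attending to encoder hidden states, can be sketched with plain PyTorch modules. Everything below (class name, sizes) is illustrative and is not the transformers EncoderDecoderModel implementation.

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    """Minimal encoder-decoder in the style the commit describes."""
    def __init__(self, vocab_size=32, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True),
            num_layers=1)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, nhead=4, batch_first=True),
            num_layers=1)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        memory = self.encoder(self.embed(src_ids))          # encoder hidden states
        hidden = self.decoder(self.embed(tgt_ids), memory)  # cross-attends to memory
        return self.lm_head(hidden)                         # per-token vocab logits
```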
Patrick von Platen authored
* fix empty prompt
* fix length in generation pipeline
Patrick von Platen authored

Stefan Schweter authored

- 27 Apr, 2020 9 commits
martindh authored
Model card for illuin release of camembert-base-fquad
Manuel Romero authored

Manuel Romero authored

Nick Doiron authored

monologg authored

Sai Saketh Aluru authored
* Add dehatebert-mono-arabic readme card
* Update dehatebert-mono-arabic model card
sshleifer authored

Lorenzo Ampil authored
* Fix typo in intro and add line under
* Add missing blank line under
* Correct types under
Julien Chaumond authored

- 25 Apr, 2020 1 commit
Txus authored

- 24 Apr, 2020 6 commits
Junyi_Li authored

Manuel Romero authored

Leandro von Werra authored
* add model card for gpt2-imdb-ctrl
* fix title
* add sentiment control description
YuvalPeleg authored

Julien Chaumond authored
Close #3921
Cola authored
* Shuffle train subset
* Cleaner shuffle
- 23 Apr, 2020 9 commits
Julien Chaumond authored
Close #3920
Julien Chaumond authored
See https://github.com/huggingface/transformers/pull/3881/files#r412292660
Should we remove SRC_DIR from sys.path right after the imports, @aaugustin?
mneilly-et authored

Clement authored

Jared T Nielsen authored
Fix TFAlbertForSequenceClassification classifier dropout probability. It was set to config.hidden_dropout_prob, but should be config.classifier_dropout_prob. (#3928)
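The fix is a one-attribute change; the snippet below is a schematic stand-in (not the real TF code) showing why the two config fields must not be conflated.

```python
class AlbertLikeConfig:
    """Illustrative stand-in for the real ALBERT config object."""
    hidden_dropout_prob = 0.0       # dropout used inside transformer layers
    classifier_dropout_prob = 0.1   # dropout meant for the classification head

def classifier_dropout_rate(config):
    # Before the fix the head read config.hidden_dropout_prob, silently
    # disabling classifier dropout whenever the two values differed.
    return config.classifier_dropout_prob
```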
peterandluc authored
Julien Chaumond authored

Julien Chaumond authored

Julien Chaumond authored
cc @sshleifer
- 22 Apr, 2020 6 commits
Manuel Romero authored
Model: TinyBERT-spanish-uncased-finetuned-ner
Manuel Romero authored

Anthony MOI authored

Lorenzo Ampil authored
* Add GenerationPipeline
* Fix parameter names
* Correct __call__ parameters
* Add model type attribute and correct function calls for prepare_input
* Take out trailing commas from init attributes
* Remove unnecessary tokenization line
* Implement support for multiple text inputs
* Apply generation support for multiple input text prompts
* Take out tensor coercion
* Take out batch index
* Add text prompt to return sequence
* Squeeze token tensor before decoding
* Return only a single list of sequences if only one prompt was used
* Correct results variable name
* Add GenerationPipeline to SUPPORTED_TASKS with the alias, initialized with GPT2
* Register AutoModelWithLMHead for both pt and tf
* Update docstring for GenerationPipeline
* Add kwargs parameter to model.generate, then take it out again
* Add generation pipeline example in pipeline docstring
* Fix max length by squeezing tokens tensor
* Apply ensure_tensor_on_device to pytorch tensor
* Include generation step in torch.no_grad
* Take out input from prepare_xlm_input and set 'en' as default xlm_language
* Apply framework-specific encoding during prepare_input
* Format with make style
* Move GenerationPipeline import to follow proper import sorting
* Take out trailing comma from generation dict
* Apply requested changes
* Change name to TextGenerationPipeline
* Apply TextGenerationPipeline rename to __init__
* Changing alias to
* Set input mapping as input to ensure_tensor_on_device
* Fix assertion placement
* Add test_text_generation
* Add TextGenerationPipeline to PipelineCommonTests
* Take out whitespace
* Format __init__ with black (several formatting passes)
* Add line to end of __init__
* Correct model tokenizer set for test_text_generation
* Ensure to return list of list, not list of string (to pass test)
* Limit test models to only 3 to limit runtime to address circleCI timeout error
* Apply review suggestions to src/transformers/pipelines.py and tests/test_pipelines.py (co-authored by Patrick von Platen)
* Remove argument docstring from __init__, add additional __call__ arguments, and reformat results to list of dict
* Fix blank result list
* Add TextGenerationPipeline to pipelines.rst
* Fix typos from adding PADDING_TEXT_TOKEN_LENGTH
* Fix incorrectly moved result list
* Apply further review suggestions to src/transformers/pipelines.py
* Add back generation line and make style
* Apply new alias, text-generation, to test_pipelines
* Fix text generation alias in test

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
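This commit registers the new pipeline under a task alias in SUPPORTED_TASKS. Below is a toy sketch of that registry pattern; the class body and return string are illustrative placeholders, not the real implementation.

```python
class TextGenerationPipeline:
    """Stand-in: the real class wraps a model, a tokenizer, and generate()."""
    def __call__(self, prompt):
        # The commit settled on returning a list of dicts per prompt.
        return [{"generated_text": prompt + " <generated continuation>"}]

# Task alias -> pipeline class, as in the SUPPORTED_TASKS mapping.
SUPPORTED_TASKS = {"text-generation": TextGenerationPipeline}

def pipeline(task):
    # Look up the pipeline class by its alias and instantiate it.
    return SUPPORTED_TASKS[task]()
```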
Julien Chaumond authored

Julien Chaumond authored
* doc
* [tests] Add sample files for a regression task
* [HUGE] Trainer
* Feedback from @sshleifer
* Feedback from @thomwolf + logging tweak
* [file_utils] when downloading concurrently, get_from_cache will use the cached file for subsequent processes
* [glue] Use default max_seq_length of 128 like before
* [glue] move DataTrainingArguments around
* [ner] Change interface of InputExample, and align run_{tf,pl}
* Re-align the pl scripts a little bit
* ner
* [ner] Add integration test
* Fix language_modeling with API tweak
* [ci] Tweak loss target
* Don't break console output
* amp.initialize: model must be on right device before
* [multiple-choice] update for Trainer
* Re-align to 827d6d6e
- 21 Apr, 2020 1 commit
Julien Chaumond authored