- 04 Aug, 2020 1 commit
-
Stas Bekman authored
* improve unit tests
  this is a sample of one test according to the request in https://github.com/huggingface/transformers/issues/5973 before I apply it to the rest
* batch 1
* batch 2
* batch 3
* batch 4
* batch 5
* style
* non-tf template
* last deletion of check_loss_output
-
- 03 Aug, 2020 1 commit
-
Julien Plu authored
* Fix TF Serving when output_hidden_states and output_attentions are True
* Add tests for saved model creation + bug fix for multiple choices models
* remove unused import
* Fix the input for several layers
* Fix test
* Fix conflict printing
* Apply style
* Fix XLM and Flaubert for TensorFlow
* Apply style
* Fix TF check version
* Apply style
* Trigger CI
-
- 31 Jul, 2020 3 commits
-
Sylvain Gugger authored
* Use return_dict=True in all tests
* Formatting
-
Suraj Patil authored
* add parse_dict to parse arguments from dict
* add unit test for parse_dict
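The dict-based argument parsing this commit describes can be sketched roughly as below. The `TrainingArgs` dataclass and the standalone `parse_dict` helper are illustrative stand-ins, not the library's actual `HfArgumentParser.parse_dict` API:

```python
from dataclasses import dataclass, fields


@dataclass
class TrainingArgs:
    # hypothetical example dataclass of arguments with defaults
    learning_rate: float = 5e-5
    num_epochs: int = 3


def parse_dict(cls, args):
    # minimal sketch: keep only keys matching declared fields,
    # let everything else fall back to the dataclass defaults
    known = {f.name for f in fields(cls)}
    return cls(**{k: v for k, v in args.items() if k in known})


args = parse_dict(TrainingArgs, {"learning_rate": 3e-5})
```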
-
Stas Bekman authored
* enable easy checkout switch
  allow having multiple repository checkouts and not needing to remember to rerun 'pip install -e .[dev]' when switching between checkouts and running tests.
* make isort happy
* examples needs one too
-
- 30 Jul, 2020 3 commits
-
Stas Bekman authored
* 2 small typos
* more typos
* correct path
-
guillaume-be authored
* initial commit for pipeline implementation
  Addition of input processing and history concatenation
* Conversation pipeline tested and working for single & multiple conversation inputs
* Added docstrings for dialogue pipeline
* Addition of dialogue pipeline integration tests
* Delete test_t5.py
* Fixed max code length
* Updated styling
* Fixed test broken by formatting tools
* Removed unused import
* Added unit test for DialoguePipeline
* Fixed Tensorflow compatibility
* Fixed multi-framework support using framework flag
* - Fixed docstring
  - Added `min_length_for_response` as an initialization parameter
  - Renamed `*args` to `conversations`, `conversations` being a `Conversation` or a `List[Conversation]`
  - Updated truncation to truncate entire segments of conversations, instead of cutting in the middle of a user/bot input
* - renamed pipeline name from dialogue to conversational
  - removed hardcoded default value of 1000 and use config.max_length instead
  - added `append_response` and `set_history` methods to the Conversation class to avoid direct field mutation
  - fixed bug in history truncation method
* Updated ConversationalPipeline to accept only active conversations (otherwise a ValueError is raised)
* Simplified input tensor conversion
* Updated attention_mask value for Tensorflow compatibility
* Updated last dialogue reference to conversational & fixed integration tests
* Fixed conflict with master
* Updates following review comments
* Updated formatting
* Added Conversation and ConversationalPipeline to the library __init__, addition of docstrings for Conversation, added both to the docs
* Update src/transformers/pipelines.py
  Updated docstring following review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
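The `Conversation` container described in this commit can be sketched roughly as follows; the field and method names here are simplified illustrations, not the class's exact API:

```python
class Conversation:
    # minimal sketch: holds the dialogue history plus one pending user input,
    # with mutation funneled through a method instead of direct field access
    def __init__(self, text):
        self.past_user_inputs = []
        self.generated_responses = []
        self.new_user_input = text

    def append_response(self, response):
        # mark the pending user input as processed and record the bot reply
        self.past_user_inputs.append(self.new_user_input)
        self.new_user_input = None
        self.generated_responses.append(response)


conv = Conversation("Hi there!")
conv.append_response("Hello! How can I help?")
```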
-
Sylvain Gugger authored
* Switch from return_tuple to return_dict
* Fix test
* [WIP] Test TF Flaubert + Add {XLM, Flaubert}{TokenClassification, MultipleC… (#5614)
  * Test TF Flaubert + Add {XLM, Flaubert}{TokenClassification, MultipleChoice} models and tests
  * AutoModels Tiny tweaks
  * Style
  * Final changes before merge
  * Re-order for simpler review
  * Final fixes
  * Addressing @sgugger's comments
  * Test MultipleChoice
* Rework TF trainer (#6038)
  * Fully rework training/prediction loops
  * fix method name
  * Fix variable name
  * Fix property name
  * Fix scope
  * Fix method name
  * Fix tuple index
  * Fix tuple index
  * Fix indentation
  * Fix variable name
  * fix eval before log
  * Add drop remainder for test dataset
  * Fix step number + fix logging datetime
  * fix eval loss value
  * use global step instead of step + fix logging at step 0
  * Fix logging datetime
  * Fix global_step usage
  * Fix breaking loop + logging datetime
  * Fix step in prediction loop
  * Fix step breaking
  * Fix train/test loops
  * Force TF at least 2.2 for the trainer
  * Use assert_cardinality to facilitate the dataset size computation
  * Log steps per epoch
  * Make tfds compliant with TPU
  * Make tfds compliant with TPU
  * Use TF dataset enumerate instead of the Python one
  * revert previous commit
  * Fix data_dir
  * Apply style
  * rebase on master
  * Address Sylvain's comments
  * Address Sylvain's and Lysandre comments
  * Trigger CI
  * Remove unused import
* Switch from return_tuple to return_dict
* Fix test
* Add recent model

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Julien Plu <plu.julien@gmail.com>
-
- 29 Jul, 2020 2 commits
-
Lysandre Debut authored
* Test TF Flaubert + Add {XLM, Flaubert}{TokenClassification, MultipleChoice} models and tests
* AutoModels Tiny tweaks
* Style
* Final changes before merge
* Re-order for simpler review
* Final fixes
* Addressing @sgugger's comments
* Test MultipleChoice
-
Funtowicz Morgan authored
* Added capability to quantize a model while exporting through ONNX.
  We do not support multiple extensions
* Reformat files
* More quality
* Ensure test_generate_identified_name compares the same object types
* Added documentation everywhere on ONNX exporter
* Use pathlib.Path instead of plain-old string
* Use f-string everywhere
* Use the correct parameters for black formatting
* Use Python 3 super() style.
* Use packaging.version to ensure installed onnxruntime version match requirements
* Fixing imports sorting order.
* Missing raise(s)
* Added quantization documentation
* Fix some spelling.
* Fix bad list header format

Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
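The version gating mentioned above ("Use packaging.version to ensure installed onnxruntime version match requirements") can be illustrated with a simplified, dependency-free stand-in for a `packaging.version` comparison; it handles plain dotted numeric versions only:

```python
def meets_minimum(installed: str, required: str) -> bool:
    # simplified sketch of a minimum-version check: compare versions as
    # integer tuples so "1.10.0" correctly sorts above "1.4.0"
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) >= parse(required)
```

In real code, `packaging.version.parse` is preferable since it also understands pre-release and dev suffixes.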
-
- 28 Jul, 2020 3 commits
-
Sam Shleifer authored
-
Sam Shleifer authored
-
Sam Shleifer authored
* MBART: support summarization tasks
* fix test
* Style
* add tokenizer test
-
- 27 Jul, 2020 1 commit
-
Joe Davison authored
* add initial zero-shot pipeline
* change default args
* update default template
* add label string splitting
* add str labels support, remove nli from name
* style
* add input validation and working tf defaults
* tests
* quality check
* add docstring to __call__
* add slow tests
* Change truncation to only_first
  also lower precision on tests for readability
* style
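The NLI reformulation behind the zero-shot pipeline can be sketched as follows; `build_nli_pairs` is a hypothetical helper showing the idea, not the pipeline's real internals:

```python
def build_nli_pairs(sequence, candidate_labels,
                    hypothesis_template="This example is {}."):
    # sketch: each candidate label is slotted into the hypothesis template
    # and paired with the input sequence as an NLI premise/hypothesis pair;
    # an NLI model's entailment score then ranks the labels
    return [(sequence, hypothesis_template.format(label))
            for label in candidate_labels]


pairs = build_nli_pairs("The team won the championship.",
                        ["sports", "politics"])
```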
-
- 23 Jul, 2020 2 commits
-
Sylvain Gugger authored
* Avoid unnecessary warnings when loading pretrained model
* Fix test
* Add other keys to ignore
* keys_to_ignore_at_load -> authorized_missing_keys
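The `authorized_missing_keys` idea — suppressing warnings for weight keys that are expected to be missing — might look like this; `filter_missing_keys` and the pattern used are illustrative, not the library's code:

```python
import re


def filter_missing_keys(missing_keys, authorized_missing_keys):
    # sketch: drop any missing key matching an authorized regex pattern,
    # so only genuinely unexpected keys trigger a warning
    return [k for k in missing_keys
            if not any(re.search(pat, k) for pat in authorized_missing_keys)]


warn_keys = filter_missing_keys(
    ["cls.predictions.bias", "encoder.layer.0.weight"],
    [r"^cls\."],  # hypothetical authorized pattern
)
```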
-
Sam Shleifer authored
-
- 20 Jul, 2020 2 commits
-
Stas Bekman authored
* DataParallel fixes:
  1. switched to a more precise check:
     - if self.args.n_gpu > 1:
     + if isinstance(model, nn.DataParallel):
  2. fix tests - require the same fixup under DataParallel as the training module
* another fix
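The isinstance-based check can be illustrated with a small self-contained sketch; `DataParallel` here is a stub standing in for `torch.nn.DataParallel`, which wraps the original model as `.module`:

```python
class DataParallel:
    # stub stand-in for torch.nn.DataParallel: wraps a model as .module
    def __init__(self, module):
        self.module = module


def unwrap_model(model):
    # the more precise check: inspect the object itself rather than a
    # gpu-count flag, which may not reflect whether the model was wrapped
    return model.module if isinstance(model, DataParallel) else model


plain = object()
wrapped = DataParallel(plain)
```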
-
Pradhy729 authored
* Don't pass sampler for iterable dataset
* Added check for test and eval dataloaders.
* Formatting
* Don't pass sampler for iterable dataset
* Added check for test and eval dataloaders.
* Formatting
* Cleaner if nesting.
* Added test for trainer and iterable dataset
* Formatting for test
* Fixed import when torch is available only.
* Added require torch decorator to helper class
* Moved dataset class inside unittest
* Removed nested if and changed model in test
* Checking torch availability for IterableDataset
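The sampler decision can be sketched as: skip the sampler whenever the dataset exposes no length. This is a rough, torch-free stand-in for the real `isinstance(dataset, torch.utils.data.IterableDataset)` check:

```python
from collections.abc import Sized


def get_sampler(dataset):
    # sketch: an iterable-style dataset has no known length or random
    # access, so no sampler can be built for it — return None and let
    # the dataloader iterate the dataset directly
    if not isinstance(dataset, Sized):
        return None
    return list(range(len(dataset)))  # stand-in for a sequential sampler
```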
-
- 18 Jul, 2020 3 commits
-
Teven authored
Slightly breaking change: this changes the functionality of `use_cache` in XLNet. If `use_cache` is True and `mem_len` is 0 or None (which is the case in the base model config), the model behaves like GPT-2 and returns mems to be used as past in generation. At training time, `use_cache` is overridden and always True.
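A toy sketch of the caching behaviour described above, with lists of tokens standing in for hidden-state tensors; this is purely illustrative, not the XLNet implementation:

```python
def step(tokens, mems=None, use_cache=True):
    # toy sketch: the forward pass prepends any cached mems, produces an
    # output for the newest position, and (when use_cache is on) returns
    # the accumulated states as mems to feed back in as "past"
    hidden = list(mems or []) + tokens
    return hidden[-1], (hidden if use_cache else None)


out, past = step(["a", "b"])          # first call, no cache yet
out2, past2 = step(["c"], mems=past)  # second call reuses the cache
```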
-
Teven authored
Slightly breaking change: this changes the functionality of `use_cache` in XLNet. If `use_cache` is True and `mem_len` is 0 or None (which is the case in the base model config), the model behaves like GPT-2 and returns mems to be used as past in generation. At training time, `use_cache` is overridden and always True.
-
- 17 Jul, 2020 3 commits
-
Teven authored
Slightly breaking change: this changes the functionality of `use_cache` in XLNet. If `use_cache` is True and `mem_len` is 0 or None (which is the case in the base model config), the model behaves like GPT-2 and returns mems to be used as past in generation. At training time, `use_cache` is overridden and always True.
-
Patrick von Platen authored
* fix merge rebase
* add intermediate reformer code
* save intermediate caching results
* save intermediate
* save intermediate results
* save intermediate
* upload next step
* fix generate tests
* make tests work
* add named tuple output
* Apply suggestions from code review
* fix use_cache for False case
* fix tensor to gpu
* fix tensor to gpu
* refactor
* refactor and make style
- 16 Jul, 2020 1 commit
-
Patrick von Platen authored
-
- 15 Jul, 2020 3 commits
-
Sam Shleifer authored
-
Funtowicz Morgan authored
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
-
Sam Shleifer authored
-
- 14 Jul, 2020 2 commits
-
Sam Shleifer authored
-
as-stevens authored
[Reformer classification head] Implement the reformer model classification head for text classification (#5198)
* Reformer model head classification implementation for text classification
* Reformat the reformer model classification code
* PR review comments, and test case implementation for reformer for classification head changes
* CI/CD reformer for classification head test import error fix
* CI/CD test case implementation added ReformerForSequenceClassification to all_model_classes
* Code formatting - fixed
* Normal test cases added for reformer classification head
* Fix test cases implementation for the reformer classification head
* removed token_type_id parameter from the reformer classification head
* fixed the test case for reformer classification head
* merge conflict with master fixed
* merge conflict, changed reformer classification to accept the choice_label parameter added in latest code
* refactored the reformer classification head test code
* reformer classification head, common transform test cases fixed
* final set of the review comments, rearranging the reformer classes and docstring added to classification forward method
* fixed the compilation error and test case fix for reformer classification head
* Apply suggestions from code review
  Remove unnecessary dup

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 13 Jul, 2020 2 commits
-
Stas Bekman authored
* implement FlaubertForTokenClassification as a subclass of XLMForTokenClassification
* fix mapping order
* add the doc
* add common tests
-
Stas Bekman authored
-
- 10 Jul, 2020 1 commit
-
Sylvain Gugger authored
* [WIP] Proposal for model outputs
* All Bert models
* Make CI green maybe?
* Fix ONNX test
* Isolate ModelOutput from pt and tf
* Formatting
* Add Electra models
* Auto-generate docstrings from outputs
* Add TF outputs
* Add some BERT models
* Revert TF side
* Remove last traces of TF changes
* Fail with a clear error message
* Add Albert and work through Bart
* Add CTRL and DistilBert
* Formatting
* Progress on Bart
* Renames and finish Bart
* Formatting
* Fix last test
* Add DPR
* Finish Electra and add FlauBERT
* Add GPT2
* Add Longformer
* Add MMBT
* Add MobileBert
* Add GPT
* Formatting
* Add Reformer
* Add Roberta
* Add T5
* Add Transformer XL
* Fix test
* Add XLM + fix XLMForTokenClassification
* Style + XLMRoberta
* Add XLNet
* Formatting
* Add doc of return_tuple arg
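The core idea of the model-outputs proposal — named attribute access, with tuple-style indexing over the populated fields kept for backward compatibility — can be sketched as follows (class and field names are illustrative):

```python
from dataclasses import dataclass, fields


@dataclass
class SequenceOutput:
    # sketch of a model-output class: fields are addressable by name,
    # and indexing skips None fields so old tuple-unpacking code keeps working
    loss: float = None
    logits: list = None

    def __getitem__(self, idx):
        present = [getattr(self, f.name) for f in fields(self)
                   if getattr(self, f.name) is not None]
        return present[idx]


out = SequenceOutput(logits=[0.1, 0.9])          # no loss computed
out2 = SequenceOutput(loss=0.3, logits=[1.0])    # loss present
```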
-
- 08 Jul, 2020 2 commits
-
Lorenzo Ampil authored
* Add B I handling to grouping
* Add fix to include separate entity as last token
* move last_idx definition outside loop
* Use first entity in entity group as reference for entity type
* Add test cases
* Take out extra class accidentally added
* Return tf ner grouped test to original
* Take out redundant last entity
* Get last_idx safely
* Fix first entity comment
* Create separate functions for group_sub_entities and group_entities (splitting call method to testable functions)
* Take out unnecessary last_idx
* Remove additional forward pass test
* Move token classification basic tests to separate class
* Move token classification basic tests back to monocolumninputtestcase
* Move base ner tests to nerpipelinetests
* Take out unused kwargs
* Add back mandatory_keys argument
* Add unitary tests for group_entities in _test_ner_pipeline
* Fix last entity handling
* Fix grouping function used
* Add typing to group_sub_entities and group_entities

Co-authored-by: ColleterVi <36503688+ColleterVi@users.noreply.github.com>
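The B-/I- handling added here can be sketched as below; `group_entities` is a simplified illustration of the idea (merge consecutive I- tokens into the preceding group of the same type), not the pipeline's actual implementation:

```python
def group_entities(entities):
    # sketch of B-/I- grouping: a B- tag opens a new entity group, while an
    # I- tag of the same type extends the group opened just before it
    groups = []
    for ent in entities:
        prefix, _, etype = ent["entity"].partition("-")
        if groups and prefix == "I" and groups[-1]["entity_group"] == etype:
            groups[-1]["word"] += " " + ent["word"]
        else:
            groups.append({"entity_group": etype or prefix, "word": ent["word"]})
    return groups


grouped = group_entities([
    {"entity": "B-PER", "word": "John"},
    {"entity": "I-PER", "word": "Smith"},
    {"entity": "B-LOC", "word": "Paris"},
])
```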
-
Patrick von Platen authored
* tf_train
* adapt timing for tpu
* fix timing
* fix timing
* fix timing
* fix timing
* update notebook
* add tests
-
- 07 Jul, 2020 5 commits
-
Sam Shleifer authored
* improve unittests for finetuning, especially w.r.t. testing frozen parameters
* fix freeze_embeds for T5
* add streamlit setup.cfg
-
Patrick von Platen authored
[Almost all TF models] TF clean up: add missing CLM / MLM loss; fix T5 naming and keras compile (#5395)
* add first version of clm tf
* make style
* add more tests for bert
* update tf clm loss
* fix tests
* correct tf ner script
* add mlm loss
* delete bogus file
* clean tf auto model + add tests
* finish adding clm loss everywhere
* fix training in distilbert
* fix flake8
* save intermediate
* fix tf t5 naming
* remove prints
* finish up
* up
* fix tf gpt2
* fix new test utils import
* fix flake8
* keep backward compatibility
* Update src/transformers/modeling_tf_albert.py
* Update src/transformers/modeling_tf_auto.py
* Update src/transformers/modeling_tf_electra.py
* Update src/transformers/modeling_tf_roberta.py
* Update src/transformers/modeling_tf_mobilebert.py
* Update src/transformers/modeling_tf_auto.py
* Update src/transformers/modeling_tf_bert.py
* Update src/transformers/modeling_tf_distilbert.py
* apply sylvains suggestions

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
Quentin Lhoest authored
* fix test imports
* fix max_length
* style
* fix tests
-
Sam Shleifer authored
* Passing all but one torchscript test
* Style
* move comment
* remove unneeded assert
-
Quentin Lhoest authored
* beginning of dpr modeling
* wip
* implement forward
* remove biencoder + better init weights
* export dpr model to embed model for nlp lib
* add new api
* remove old code
* make style
* fix dumb typo
* don't load bert weights
* docs
* docs
* style
* move the `k` parameter
* fix init_weights
* add pretrained configs
* minor
* update config names
* style
* better config
* style
* clean code based on PR comments
* change Dpr to DPR
* fix config
* switch encoder config to a dict
* style
* inheritance -> composition
* add messages in assert statements
* add dpr reader tokenizer
* one tokenizer per model
* fix base_model_prefix
* fix imports
* typo
* add convert script
* docs
* change tokenizers conf names
* style
* change tokenizers conf names
* minor
* minor
* fix wrong names
* minor
* remove unused convert functions
* rename convert script
* use return_tensors in tokenizers
* remove n_questions dim
* move generate logic to tokenizer
* style
* add docs
* docs
* quality
* docs
* add tests
* style
* add tokenization tests
* DPR full tests
* Stay true to the attention mask building
* update docs
* missing param in bert input docs
* docs
* style

Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
-