- 19 May, 2020 7 commits
-
-
Iz Beltagy authored
* first commit
* bug fixes
* better examples
* undo padding
* remove wrong VOCAB_FILES_NAMES
* License
* make style
* make isort happy
* unit tests
* integration test
* make `black` happy by undoing `isort` changes!!
* lint
* no need for the padding value
* batch_size not bsz
* remove unused type casting
* seqlen not seq_len
* staticmethod
* `bert` selfattention instead of `n2`
* uint8 instead of bool + lints
* pad inputs_embeds using embeddings not a constant (see the padding sketch below)
* black
* unit test with padding
* fix unit tests
* remove redundant unit test
* upload model weights
* resolve todo
* simpler _mask_invalid_locations without lru_cache + backward compatible masked_fill_
* increase unittest coverage
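The padding items above follow from the fact that Longformer-style sliding-window attention needs the sequence length to be a multiple of the attention window. A minimal sketch of that idea, with illustrative names rather than the repository's exact code:

```python
import torch
import torch.nn.functional as F


def pad_to_window_size(input_ids, attention_mask, attention_window, pad_token_id):
    """Pad a batch so seq_len is a multiple of the sliding attention window.

    Illustrative helper: token ids are padded with the tokenizer's pad id;
    in the actual model, inputs_embeds are padded through the embedding
    layer rather than with a constant tensor.
    """
    seq_len = input_ids.size(1)
    padding_len = (attention_window - seq_len % attention_window) % attention_window
    if padding_len > 0:
        input_ids = F.pad(input_ids, (0, padding_len), value=pad_token_id)
        attention_mask = F.pad(attention_mask, (0, padding_len), value=0)
    return input_ids, attention_mask
```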
-
Girishkumar authored
-
Shaoyen authored
* Map optimizer to correct device after loading from checkpoint.
* Make style test pass

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
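A minimal sketch of the device-mapping idea described above, assuming the checkpoint is loaded onto CPU first (helper name and path are illustrative):

```python
import torch


def load_optimizer_state(optimizer, checkpoint_path, device):
    """Load an optimizer checkpoint on CPU, then move every state tensor
    to the device the model parameters live on."""
    optimizer.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    for state in optimizer.state.values():
        for key, value in state.items():
            if isinstance(value, torch.Tensor):
                state[key] = value.to(device)
```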
-
Julien Chaumond authored
-
Julien Chaumond authored
* Distributed eval: SequentialDistributedSampler + gather all results (see the sketch below)
* For consistency, only write to disk from world_master. Closes https://github.com/huggingface/transformers/issues/4272
* Working distributed eval
* Hook into scripts
* Fix #3721 again
* TPU.mesh_reduce: stay in tensor space. Thanks @jysohn23
* Just a small comment
* whitespace
* torch.hub: pip install packaging
* Add test scenarios
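A hedged sketch of the gather-all-results pattern named in the first item, assuming a torch.distributed process group is already initialized (function names are illustrative):

```python
import torch
import torch.distributed as dist


def distributed_concat(tensor: torch.Tensor, num_total_examples: int) -> torch.Tensor:
    """Gather each rank's predictions, concatenate them, and drop the trailing
    samples that a sequential, evenly-sharded sampler may have duplicated."""
    output = [tensor.clone() for _ in range(dist.get_world_size())]
    dist.all_gather(output, tensor)
    return torch.cat(output, dim=0)[:num_total_examples]


def is_world_master() -> bool:
    # for consistency, only the rank-0 process writes results to disk
    return not dist.is_initialized() or dist.get_rank() == 0
```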
-
Julien Chaumond authored
* Test case for #3936
* multigpu tests pass on pytorch 1.4.0
* Fixup
* multigpu tests pass on pytorch 1.5.0
* Update src/transformers/modeling_utils.py
* Update src/transformers/modeling_utils.py
* rename multigpu to require_multigpu
* mode doc
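A minimal sketch of what a `require_multigpu` test decorator typically looks like; this is an assumption about the shape of the helper, not the repository's exact code:

```python
import unittest

import torch


def require_multigpu(test_case):
    """Skip the decorated test unless more than one CUDA device is visible."""
    if torch.cuda.device_count() < 2:
        return unittest.skip("test requires multiple GPUs")(test_case)
    return test_case
```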
-
Rakesh Chada authored
* makes fetching last learning rate in trainer backward compatible
* split comment to multiple lines
* fixes black styling issue
* uses version to create a more explicit logic
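The version-based logic presumably hinges on PyTorch 1.4 renaming the scheduler accessor. A hedged sketch of that check (helper name is illustrative):

```python
import torch
from packaging import version


def last_learning_rate(scheduler) -> float:
    """Return the most recent learning rate across PyTorch versions:
    get_last_lr() exists from 1.4 on, older releases only offer get_lr()."""
    if version.parse(torch.__version__) >= version.parse("1.4"):
        return scheduler.get_last_lr()[0]
    return scheduler.get_lr()[0]
```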
-
- 18 May, 2020 19 commits
-
-
Stefan Dumitrescu authored
* Create README.md
* Create README.md
* Update README.md
* Update README.md
* Apply suggestions from code review

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Oliver Guhr authored
-
Martin Müller authored
-
sy-wada authored
- add a citation.
- modify the table of the BLUE benchmark.

The table in the first version was not displayed correctly on https://huggingface.co/seiya/oubiobert-base-uncased. Could you please confirm that this fix allows it to display correctly?
-
Manuel Romero authored
I followed Google's usage example for its ELECTRA small model, but I found it was not meaningful, so I created a better example.
-
Suraj Patil authored
* add model card for t5-base-squad
* Update model_cards/valhalla/t5-base-squad/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
HUSEIN ZOLKEPLI authored
* add bert bahasa readme
* update readme
* update readme
* added xlnet
* added tiny-bert and fix xlnet readme
* added albert base
* added albert tiny
* added electra model
* added gpt2 117m bahasa readme
* added gpt2 345m bahasa readme
* added t5-base-bahasa
* fix readme
* Update model_cards/huseinzol05/t5-base-bahasa-cased/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Funtowicz Morgan authored
* Adding optimizations block from ONNXRuntime (see the sketch below).
* Turn off external data format by default for PyTorch export.
* Correct the way use_external_format is passed through the cmdline args.
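The optimizations block presumably amounts to letting ONNX Runtime apply its graph optimizations when the exported model is loaded. A minimal sketch under that assumption:

```python
from onnxruntime import GraphOptimizationLevel, InferenceSession, SessionOptions


def create_optimized_session(onnx_model_path: str) -> InferenceSession:
    """Load an exported ONNX model with all ONNX Runtime graph
    optimizations enabled."""
    options = SessionOptions()
    options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL
    return InferenceSession(onnx_model_path, options)
```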
-
Patrick von Platen authored
* Update README.md
* Update README.md
* Update README.md
* Update README.md
-
Sam Shleifer authored
-
Boris Dayma authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
* fix fp16 in t5
* make style
* refactor invert_attention_mask fn
* fix typo
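The fp16 fix revolves around the additive attention mask: a constant like -1e9 overflows in float16, so the "minus infinity" value has to respect the compute dtype. A hedged sketch of the idea behind the refactor, not the library's exact implementation:

```python
import torch


def invert_attention_mask(mask: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    """Turn a 0/1 padding mask of shape (batch, seq_len) into an additive mask
    broadcastable over attention scores, using a dtype-safe large negative."""
    extended = mask[:, None, None, :].to(dtype)
    min_value = -1e4 if dtype == torch.float16 else -1e9
    return (1.0 - extended) * min_value
```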
-
Soham Chatterjee authored
-
Julien Chaumond authored
See https://github.com/huggingface/transformers/pull/4367#discussion_r426356693. Hat tip @girishponkiya
-
Patrick von Platen authored
-
Funtowicz Morgan authored
-
Mehrad Moradshahi authored
-
- 17 May, 2020 1 commit
-
-
Lorenzo Ampil authored
* Add index to be returned by NerPipeline to allow for the creation of
* Add entity groups (see the grouping sketch below)
* Convert entity list to dict
* Add entity to entity_group_disagg after updating entity groups
* Change 'group' parameter to 'grouped_entities'
* Add unit tests for grouped NER pipeline case
* Correct variable name typo for NER_FINETUNED_MODELS
* Sync grouped tests to recent test updates
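A rough sketch of the entity-grouping idea introduced here: merge consecutive token-level predictions that share a label into one entity group. The dict keys and helper name are assumptions for illustration, not the pipeline's exact output schema:

```python
def group_entities(token_predictions):
    """Merge consecutive token predictions with the same entity label
    (and adjacent token indices) into entity groups."""
    groups, current = [], []
    for token in token_predictions:
        same_label = current and token["entity"] == current[-1]["entity"]
        adjacent = current and token["index"] == current[-1]["index"] + 1
        if same_label and adjacent:
            current.append(token)
        else:
            if current:
                groups.append(current)
            current = [token]
    if current:
        groups.append(current)
    return [
        {"entity_group": g[0]["entity"], "word": " ".join(t["word"] for t in g)}
        for g in groups
    ]
```

For example, three consecutive tokens tagged `I-ORG` ("Hugging", "Face", "Inc") would collapse into a single ORG entity group.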
-
- 15 May, 2020 10 commits
-
-
Julien Chaumond authored
-
Julien Chaumond authored
-
Julien Chaumond authored
-
Julien Chaumond authored
-
Julien Chaumond authored
-
Nikita authored
-
Jared T Nielsen authored
-
Lysandre Debut authored
-
Funtowicz Morgan authored
-
Julien Chaumond authored
-
- 14 May, 2020 3 commits
-
-
Lysandre Debut authored
* Better p_mask building
* Addressing @mfuntowicz comments
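For context, `p_mask` in the question-answering pipeline flags tokens that can never be part of the answer (the question and special tokens). A hedged sketch, assuming a question-first encoding where `token_type_ids == 1` marks the context:

```python
import numpy as np


def build_p_mask(token_type_ids, special_tokens_mask):
    """Return a list where 1 = token cannot be part of the answer
    and 0 = candidate answer token."""
    not_context = np.asarray(token_type_ids) == 0
    is_special = np.asarray(special_tokens_mask) == 1
    return (not_context | is_special).astype(int).tolist()
```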
-
Morgan Funtowicz authored
-
Funtowicz Morgan authored
* Added generic ONNX conversion script for PyTorch model (see the export sketch below).
* WIP initial TF support.
* TensorFlow/Keras ONNX export working.
* Print framework version info
* Add possibility to check the model is correctly loading on ONNX runtime.
* Remove quantization option.
* Specify ONNX opset version when exporting.
* Formatting.
* Remove unused imports.
* Make functions more generally reusable from other parts of the code.
* isort happy.
* flake happy
* Export only feature-extraction for now
* Correctly check inputs order / filter before export.
* Removed task variable
* Fix invalid args call in load_graph_from_args.
* Fix invalid args call in convert.
* Fix invalid args call in infer_shapes.
* Raise exception and catch in caller function instead of exit.
* Add 04-onnx-export.ipynb notebook
* More WIP on the notebook
* Remove unused imports
* Simplify & remove unused constants.
* Export with constant_folding in PyTorch
* Let's try to put function args in the right order this time ...
* Disable external_data_format temporarily
* ONNX notebook draft ready.
* Updated notebooks charts + wording
* Correct error while exporting last chart in notebook.
* Addressing @LysandreJik comment.
* Set ONNX opset to 11 as default value.
* Set opset param mandatory
* Added ONNX export unittests
* Quality.
* flake8 happy
* Add keras2onnx dependency on extras["tf"]
* Pin keras2onnx on github master to v1.6.5
* Second attempt.
* Third attempt.
* Use the right repo URL this time ...
* Do the same for onnxconverter-common
* Added keras2onnx and onnxconverter-common to 1.7.0 to support TF2.2
* Correct commit hash.
* Addressing PR review: Optimizations are enabled by default.
* Addressing PR review: small changes in the notebook
* setup.py comment about keras2onnx versioning.
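A rough sketch of the PyTorch-side export described in the first item: trace the model with a dummy input and declare dynamic batch/sequence axes. The model name, output names, and constant-folding choice are illustrative assumptions; the actual script also handles TF/Keras export and ONNX Runtime validation:

```python
import torch
from transformers import AutoModel, AutoTokenizer


def export_to_onnx(model_name: str, output_path: str, opset: int = 11) -> None:
    """Export a feature-extraction model to ONNX with dynamic axes."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name, return_dict=False)
    model.eval()

    dummy = tokenizer("This is a sample input", return_tensors="pt")
    torch.onnx.export(
        model,
        (dummy["input_ids"], dummy["attention_mask"]),
        output_path,
        opset_version=opset,
        do_constant_folding=True,
        input_names=["input_ids", "attention_mask"],
        output_names=["last_hidden_state"],
        dynamic_axes={
            "input_ids": {0: "batch", 1: "sequence"},
            "attention_mask": {0: "batch", 1: "sequence"},
            "last_hidden_state": {0: "batch", 1: "sequence"},
        },
    )
```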
-