"pytorch_transformers/modeling_gpt2.py" did not exist on "0dd796e359d1fbf9c0ea39b04e9b5655e5a09dee"
- 18 May, 2020 (1 commit)
  - Mehrad Moradshahi authored

- 17 May, 2020 (1 commit)
  - Lorenzo Ampil authored
    * Add index to be returned by NerPipeline to allow for the creation of entity groups
    * Add entity groups
    * Convert entity list to dict
    * Add entity to entity_group_disagg after updating entity groups
    * Change 'group' parameter to 'grouped_entities'
    * Add unit tests for grouped NER pipeline case
    * Correct variable name typo for NER_FINETUNED_MODELS
    * Sync grouped tests to recent test updates
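For context, a minimal sketch of the new grouping option in use (the example sentence and default model are illustrative, not taken from the commit):

```python
from transformers import pipeline

# With grouped_entities=True, consecutive tokens that share an entity type
# are merged into one span instead of being returned token by token.
nlp = pipeline("ner", grouped_entities=True)

# e.g. "Hugging" and "Face" come back as a single ORG entity group
print(nlp("Hugging Face is based in New York City."))
```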

- 15 May, 2020 (10 commits)
  - Julien Chaumond authored
  - Julien Chaumond authored
  - Julien Chaumond authored
  - Julien Chaumond authored
  - Julien Chaumond authored
  - Nikita authored
  - Jared T Nielsen authored
  - Lysandre Debut authored
  - Funtowicz Morgan authored
  - Julien Chaumond authored

- 14 May, 2020 (14 commits)
  - Lysandre Debut authored
    * Better p_mask building
    * Addressing @mfuntowicz comments
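For context, p_mask flags tokens that can never be part of a predicted answer (the question and most special tokens); a rough sketch of the idea, not the library's exact code:

```python
import numpy as np

# token_type_ids: 0 for question tokens, 1 for context tokens
# special_tokens_mask: 1 for special tokens such as [CLS] and [SEP]
def build_p_mask(token_type_ids, special_tokens_mask):
    # 1 = token is masked out of answer selection, 0 = token is eligible
    p_mask = np.asarray(token_type_ids) == 0
    p_mask = p_mask | (np.asarray(special_tokens_mask) == 1)
    return p_mask.astype(int)

# "[CLS] who ? [SEP] bob did [SEP]" -> only "bob did" stays eligible
print(build_p_mask([0, 0, 0, 0, 1, 1, 1], [1, 0, 0, 1, 0, 0, 1]))
```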
  - Morgan Funtowicz authored
  - Funtowicz Morgan authored
    * Added generic ONNX conversion script for PyTorch model.
    * WIP initial TF support.
    * TensorFlow/Keras ONNX export working.
    * Print framework version info
    * Add possibility to check the model is correctly loading on ONNX runtime.
    * Remove quantization option.
    * Specify ONNX opset version when exporting.
    * Formatting.
    * Remove unused imports.
    * Make functions more generally reusable from other parts of the code.
    * isort happy.
    * flake happy
    * Export only feature-extraction for now
    * Correctly check inputs order / filter before export.
    * Removed task variable
    * Fix invalid args call in load_graph_from_args.
    * Fix invalid args call in convert.
    * Fix invalid args call in infer_shapes.
    * Raise exception and catch in caller function instead of exit.
    * Add 04-onnx-export.ipynb notebook
    * More WIP on the notebook
    * Remove unused imports
    * Simplify & remove unused constants.
    * Export with constant_folding in PyTorch
    * Let's try to put function args in the right order this time ...
    * Disable external_data_format temporarily
    * ONNX notebook draft ready.
    * Updated notebook charts + wording
    * Correct error while exporting last chart in notebook.
    * Addressing @LysandreJik comment.
    * Set ONNX opset to 11 as default value.
    * Set opset param mandatory
    * Added ONNX export unit tests
    * Quality.
    * flake8 happy
    * Add keras2onnx dependency on extras["tf"]
    * Pin keras2onnx on GitHub master to v1.6.5
    * Second attempt.
    * Third attempt.
    * Use the right repo URL this time ...
    * Do the same for onnxconverter-common
    * Added keras2onnx and onnxconverter-common to 1.7.0 to support TF 2.2
    * Correct commit hash.
    * Addressing PR review: optimizations are enabled by default.
    * Addressing PR review: small changes in the notebook
    * setup.py comment about keras2onnx versioning.
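A hedged sketch of calling the new conversion entry point; the module path and exact signature are assumptions based on the bullets above (mandatory opset, PyTorch and TF frameworks, feature-extraction only):

```python
from pathlib import Path

# Assumed module path and signature for the conversion entry point added here.
from transformers.convert_graph_to_onnx import convert

convert(
    framework="pt",                          # "pt" for PyTorch, "tf" for TensorFlow/Keras
    model="bert-base-cased",
    output=Path("onnx/bert-base-cased.onnx"),
    opset=11,                                # default opset per the commit message
)
```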
  - Suraj Patil authored
    * fix loss calculation in evaluation
    * fix evaluation on TPU when prediction_loss_only is True
  - Savaş Yıldırım authored
    * Create README.md
    * Update model_cards/savasy/bert-base-turkish-squad/README.md

    Co-authored-by: Julien Chaumond <chaumond@gmail.com>
  - sy-wada authored
  - Sam Shleifer authored
  - Sam Shleifer authored
    Covers torch and tf; also fixes a failing @slow test.
  - Julien Chaumond authored
    * Fix: unpin flake8 and fix cs errors
    * Ok we still need to quote those
  - Julien Chaumond authored
    See context in https://github.com/huggingface/transformers/pull/4223
  - Julien Chaumond authored
  - Lysandre Debut authored
  - Viktor Alm authored
    Unfortunately I accidentally orphaned my other PR.
  - Manuel Romero authored

- 13 May, 2020 (7 commits)
  - Lysandre authored
  - Sam Shleifer authored
    [Marian Fixes] prevent predicting pad_token_id before softmax, support language codes, name multilingual models (#4290)
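The pad-token fix amounts to forcing the pad logit to negative infinity before the softmax, so generation can never emit it; a minimal sketch of the idea (the function name is illustrative, not Marian's actual method):

```python
import torch

def mask_pad_logits(logits: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # Setting the pad logit to -inf gives it probability 0 after softmax,
    # so generation can never predict pad_token_id.
    logits[:, pad_token_id] = float("-inf")
    return logits

logits = torch.randn(2, 50)  # (batch, vocab)
probs = torch.softmax(mask_pad_logits(logits, pad_token_id=0), dim=-1)
assert torch.all(probs[:, 0] == 0)
```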
  - Patrick von Platen authored
    * add first text for generation
    * add generation pipeline to usage
    * Created using Colaboratory
    * correct docstring
    * finish
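Since this commit documents generation usage, here is a short sketch of the pattern being documented (the model and sampling parameters are illustrative):

```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("gpt2")

input_ids = tokenizer.encode("Today is a beautiful day and", return_tensors="pt")
# Sample a continuation; top_k restricts sampling to the 50 most likely tokens.
output = model.generate(input_ids, max_length=40, do_sample=True, top_k=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```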
  - Elyes Manai authored
  - Julien Plu authored
    * Add QA trainer example for TF
    * Make data_dir optional
    * Fix parameter logic
    * Fix feature convert
    * Update the READMEs to add the question-answering task
    * Apply style
    * Change 'sequence-classification' to 'text-classification' and prefix all the metric names with 'eval'
    * Apply style
    * Apply style
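The example fine-tunes a TensorFlow question-answering model; for orientation, a sketch of the task's inference API (the checkpoint and inputs are illustrative, and a SQuAD-fine-tuned checkpoint would be needed for sensible answers):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

inputs = tokenizer.encode_plus(
    "Who wrote the library?", "The library was written by Hugging Face.",
    return_tensors="tf",
)
start_logits, end_logits = model(inputs)[:2]

# Pick the most likely answer span from the context.
start = int(tf.argmax(start_logits, axis=-1).numpy()[0])
end = int(tf.argmax(end_logits, axis=-1).numpy()[0])
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].numpy().tolist())
print(tokenizer.convert_tokens_to_string(tokens[start : end + 1]))
```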
  - Denis authored
    Fix for #3865: PreTrainedTokenizer mapped " do not" into " don't" when .decode(...) was called. Removed the " do not" --> " don't" mapping from clean_up_tokenization(...). (#4024)
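To illustrate the bug, a paraphrased sketch of the behaviour being removed (not the exact source of clean_up_tokenization):

```python
def clean_up_tokenization_old(out_string: str) -> str:
    # ...among legitimate detokenization rules such as " ." -> "." and
    # " n't" -> "n't", the function also contained the rule removed here:
    return out_string.replace(" do not", " don't")

# The over-eager contraction rewrote actual text, not just tokenization debris:
print(clean_up_tokenization_old("I do not like it"))  # "I don't like it" -- wrong
```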
  - Julien Chaumond authored
    * Improvements to the wandb integration
    * small reorg + no global necessary
    * feat(trainer): log epoch and final metrics
    * Simplify logging a bit
    * Fixup
    * Fix crash when just running eval

    Co-authored-by: Chris Van Pelt <vanpelt@gmail.com>
    Co-authored-by: Boris Dayma <boris.dayma@gmail.com>
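A minimal sketch of how the integration is driven from a user's side; the environment variable names are assumptions about the configuration knobs:

```python
import os

# Assumed configuration knobs: when wandb is importable, Trainer initializes
# a run automatically and logs losses, epoch, and final eval metrics to it.
os.environ["WANDB_PROJECT"] = "transformers-demo"  # log runs under this project
# os.environ["WANDB_DISABLED"] = "true"            # set to opt out entirely
```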

- 12 May, 2020 (7 commits)
  - Funtowicz Morgan authored
    * Allow BatchEncoding to be initialized empty. This is required by recent changes introduced in TF 2.2.
    * Attempt to unpin TensorFlow to 2.2 with the previous commit.
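A sketch of what the change permits; the import path is an assumption and differs across versions:

```python
# Assumed import path as of this version; later releases also export it top-level.
from transformers.tokenization_utils import BatchEncoding

# TF 2.2's Keras internals may instantiate input structures with no
# arguments, so an empty BatchEncoding must now construct cleanly.
empty = BatchEncoding()
print(len(empty))  # 0 -- behaves as an empty mapping
```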
  - Savaş Yıldırım authored
  - Savaş Yıldırım authored
  - Stefan Schweter authored
  - Viktor Alm authored
  - Julien Chaumond authored
  - Viktor Alm authored
    * catch gpu len 1 set to gpu0
    * Add mpc to trainer
    * Add MPC for TF
    * fix TF automodel for MPC and add Albert
    * Apply style
    * Fix import
    * Note to self: double check
    * Make shape None, None for datasetgenerator output shapes
    * Add from_pt bool which doesn't seem to work
    * Original checkpoint dir
    * Fix docstrings for automodel
    * Update readme and apply style
    * Colab should probably not be from users
    * Colabs should probably not be from users
    * Add colab
    * Update README.md
    * Update README.md
    * Cleanup __init__
    * Cleanup flake8 trailing comma
    * Update src/transformers/training_args_tf.py
    * Update src/transformers/modeling_tf_auto.py

    Co-authored-by: Viktor Alm <viktoralm@pop-os.localdomain>
    Co-authored-by: Julien Chaumond <chaumond@gmail.com>
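A hedged sketch of the multiple-choice (MPC) pieces this adds; the Auto* class name follows the library's conventions and the model choice is illustrative:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-uncased")

prompt = "The capital of France is"
choices = ["Paris.", "Berlin."]

# Multiple-choice models score each (prompt, choice) pair; inputs are
# stacked to shape (batch, num_choices, seq_len).
encoded = [
    tokenizer.encode_plus(prompt, c, max_length=16, pad_to_max_length=True)
    for c in choices
]
input_ids = tf.constant([[e["input_ids"] for e in encoded]])
scores = model(input_ids)[0]               # (batch, num_choices)
print(tf.argmax(scores, axis=-1).numpy())  # index of the preferred choice
```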