- 03 Aug, 2021 3 commits
  - Michal Szutenberg authored:
    This change enables tf.keras.mixed_precision with bf16.
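The commit above refers to Keras's mixed-precision API; as a rough configuration sketch (not the commit's actual code), enabling bf16 compute typically looks like this:

```python
import tensorflow as tf

# Select the bfloat16 mixed-precision policy globally: compute runs in
# bfloat16 while variables stay in float32 for numerical stability.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

# Any layer created afterwards picks up the policy automatically.
layer = tf.keras.layers.Dense(8)
```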
  - Philip May authored:
    * Fix #12970
    * Update tests/test_trainer.py
    * Remove unnecessary issue link
    * Fix test formatting
    Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
  - Sylvain Gugger authored
- 02 Aug, 2021 3 commits
  - Chungman Lee authored:
    * Fix typo in example/text-classification README
    * Add space to align the table
  - Sylvain Gugger authored
  - Tadej Svetina authored
- 01 Aug, 2021 1 commit
  - Alex Hedges authored
- 30 Jul, 2021 6 commits
  - Stefan Schweter authored
  - Sylvain Gugger authored
  - Kevin Canwen Xu authored:
    * Add multilingual documentation support
    * make style
    * revert
  - wulu473 authored:
    Co-authored-by: Lukas Wutschitz <lukas.wutschitz@microsoft.com>
  - harshithapv authored:
    * Minor change to log AzureML only for rank 0
    * Fix typo
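The rank-0 pattern that commit describes can be sketched as follows (function and parameter names here are illustrative, not the library's actual callback code): metrics are reported only from the main process so multi-GPU runs do not log duplicates.

```python
def should_report_metrics(local_rank: int) -> bool:
    # Report only from the main process: rank 0, or -1 when
    # distributed training is not used at all.
    return local_rank in (-1, 0)

# Illustrative use: only the first two of these "workers" would log.
decisions = [should_report_metrics(rank) for rank in (-1, 0, 1, 2)]
```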
  - 21jun authored:
    The help text for `ModelArguments.gradient_checkpointing` should be "If True, use gradient checkpointing to save memory at the expense of slower backward pass," not "Whether to freeze the feature extractor layers of the model," which was copied from the `freeze_feature_extractor` argument.
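A minimal sketch of how such help strings are attached in the example scripts' argument dataclasses (field defaults here are assumptions; only the two fields from the message above are shown):

```python
from dataclasses import dataclass, field


@dataclass
class ModelArguments:
    # Sketch of the two fields mentioned above; defaults are assumptions.
    freeze_feature_extractor: bool = field(
        default=True,
        metadata={"help": "Whether to freeze the feature extractor layers of the model."},
    )
    gradient_checkpointing: bool = field(
        default=False,
        metadata={
            "help": (
                "If True, use gradient checkpointing to save memory at the "
                "expense of slower backward pass."
            )
        },
    )


# Collect each field's help text, e.g. for building an argument parser.
help_texts = {
    name: f.metadata["help"] for name, f in ModelArguments.__dataclass_fields__.items()
}
```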
- 29 Jul, 2021 3 commits
  - Kevin Canwen Xu authored:
    * Add CpmTokenizerFast
    * Fix isort
    * Overwrite _batch_encode_plus
  - Nicolas Patry authored:
    * Update feature extraction pipeline
    * Leave one small model for actual values check
    * Fix tests: better support for tokenizers with no pad token, increase PegasusModelTesterConfig for pipelines, make feature extraction tests more permissive (don't test multimodal or encoder-decoder models)
    * Fix model loading with incorrect shape (+ model with HEAD)
    * Update tests/test_pipelines_common.py
    * Revert modeling_utils modification
    * Some corrections
    * Update tests/test_pipelines_feature_extraction.py
    * Syntax
    * Fix text-classification tests
    * Don't modify this file
    Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
  - Funtowicz Morgan authored:
    * Raise an issue if the PyTorch version is < 1.8.0
    * Attempt to add a test to ensure it correctly raises
    * Missing docstring
    * Second attempt, patch with string absolute import
    * Let's do the call before checking it was called ...
    * Use the correct function ... 🤦
    * Raise ImportError and AssertionError respectively when unable to find torch and when the torch version is not sufficient
    * Correct path mock patching
    * Relax constraint for torch_onnx_dict_inputs to ge instead of eq
    * Style
    * Split each version requirement for torch
    * Let's compare versions directly
    * Import torch_version after checking PyTorch is installed
    * @require_torch
- 28 Jul, 2021 12 commits
  - Will Frey authored:
    Change `PreTrainedConfig` -> `PretrainedConfig` in the docstring for `AutoTokenizer.from_pretrained(...)`.
  - Will Frey authored:
    Fix `config.decoder.__class` -> `config.decoder.__class__`.
  - Will Frey authored:
    Change `torch.Tensor` -> `torch.FloatTensor` in `TemperatureLogitsWarper` to be consistent with the `LogitsWarper` ABC signature annotation.
  - Will Frey authored:
    While `Iterable[Iterable[int]]` is a nicer annotation (it's covariant!), the defensive statements parsing out `bad_words_ids` in `__init__(...)` force the caller to pass in `List[List[int]]`. I've changed the annotation to make that clear.
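A hedged sketch of why the looser annotation would mislead callers: defensive `isinstance(..., list)` checks (modeled on the description above, not copied from the library) reject perfectly valid `Iterable[Iterable[int]]` values such as tuples or generators, so the annotation should promise concrete lists.

```python
from typing import List


def validate_bad_words_ids(bad_words_ids: List[List[int]]) -> List[List[int]]:
    # Hypothetical validation mirroring the defensive parsing described above:
    # a generic Iterable[Iterable[int]] (tuple, generator, ...) would fail
    # these isinstance checks, hence the List[List[int]] annotation.
    if not isinstance(bad_words_ids, list) or len(bad_words_ids) == 0:
        raise ValueError("`bad_words_ids` has to be a non-empty list")
    if any(not isinstance(ids, list) for ids in bad_words_ids):
        raise ValueError("`bad_words_ids` has to be a list of lists")
    if any(
        any(not isinstance(tok, int) or tok < 0 for tok in ids) for ids in bad_words_ids
    ):
        raise ValueError("each inner list has to contain non-negative integers")
    return bad_words_ids
```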
  - chutaklee authored:
    * Fix distiller
    * Fix style
  - Will Frey authored:
    `_BaseAutoModelClass` was missing `classmethod` decorators on the `from_config(...)` and `from_pretrained(...)` methods.
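A small illustration of what the decorator buys (class names here are hypothetical, not the library's code): with `@classmethod`, calling the factory on a subclass hands that subclass in as `cls`, so the base class can construct the right type; without it, `SubClass.from_config(cfg)` would try to bind `cfg` as `self`.

```python
class _BaseAutoSketch:
    # Hypothetical stand-in for the pattern: the factory receives the
    # concrete subclass as `cls` because it is a classmethod.
    @classmethod
    def from_config(cls, config):
        instance = cls()
        instance.config = config
        return instance


class AutoModelSketch(_BaseAutoSketch):
    pass


# The factory instantiates the subclass it was invoked on.
model = AutoModelSketch.from_config({"hidden_size": 8})
```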
  - Will Frey authored:
    Change `score` -> `scores` because the argument is not positional-only, so consistently named parameters are needed across the subclasses. The subclasses appear to favor `scores` over `score`.
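To illustrate the point with hypothetical classes (not the library's actual processor code): since the parameter can be passed by keyword, a subclass that renames it breaks any caller written against the base class's name.

```python
class Processor:
    def __call__(self, scores):
        return scores


class Doubling(Processor):
    # Keeping the parameter named `scores` means keyword calls written
    # against the base class keep working for every subclass.
    def __call__(self, scores):
        return [s * 2 for s in scores]


def apply(processor, values):
    # A keyword call: this only works if subclasses keep the same name.
    return processor(scores=values)


out = apply(Doubling(), [1, 2, 3])
```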
  - Sylvain Gugger authored
  - Sylvain Gugger authored
  - Sylvain Gugger authored
  - Buddhi Chathuranga Senarathna authored
  - Elysium1436 authored:
    * Fix train_test_split test_size argument
    * `Seq2SeqTrainer`: set max_length and num_beams only when not None (#12899)
    * [FLAX] Minor fixes in CLM example (#12914)
    * Fix module path for symbolic_trace example
    Co-authored-by: cchen-dialpad <47165889+cchen-dialpad@users.noreply.github.com>
    Co-authored-by: Stefan Schweter <stefan@schweter.it>
    Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
- 27 Jul, 2021 3 commits
  - Sylvain Gugger authored
  - Stefan Schweter authored:
    * readme: fix retrieval of vocab size for flax clm example
    * examples: fix flax clm example when using training/evaluation files
  - cchen-dialpad authored:
    * Set max_length and num_beams only when not None
    * Fix instance variables
    * Fix code style
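The "only when not None" fix follows a common fallback pattern; a hedged sketch with made-up names (not the trainer's actual code):

```python
def resolve_generation_args(config_max_length, config_num_beams,
                            max_length=None, num_beams=None):
    # Fall back to the config values only when the caller did not pass
    # an explicit override; an explicit value always wins.
    if max_length is None:
        max_length = config_max_length
    if num_beams is None:
        num_beams = config_num_beams
    return max_length, num_beams
```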
- 26 Jul, 2021 9 commits
  - Sylvain Gugger authored
  - Sylvain Gugger authored
  - Sylvain Gugger authored
  - Nicolas Patry authored:
    * Better heuristic for the token-classification pipeline. Relooking at the problem makes things much simpler: given only the ids from a tokenizer, there is in general no way to recover whether some substring is part of a word or not. Within the pipeline, however, the offsets still give access to the original string, so we can simply check whether the character preceding a token (if it exists) is a space. This will be wrong for tokenizers that contain spaces within tokens, and for tokenizers whose offsets include spaces (probably not many). This heuristic should be fully backward compatible and still handles non-word-based tokenizers.
    * Updating test with real values.
    * We still need the older "correct" heuristic to prevent fusing punctuation.
    * Adding a real warning when important.
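A toy sketch of the space-before-offset heuristic described above (the offsets here are invented for illustration; real ones come from a fast tokenizer's offset mapping):

```python
def starts_new_word(text: str, offset_start: int) -> bool:
    # A token begins a new word when it sits at the start of the text,
    # or when the character just before its span is a space.
    if offset_start == 0:
        return True
    return text[offset_start - 1] == " "


text = "Hugging Face makes transformers"
# (start, end) character offsets; the last word is split into two subwords.
offsets = [(0, 7), (8, 12), (13, 18), (19, 24), (24, 31)]
word_starts = [starts_new_word(text, start) for start, _ in offsets]
```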
  - Matt authored:
    * Add new multiple-choice example, remove old one
  - Sylvain Gugger authored
  - Sylvain Gugger authored
  - Sylvain Gugger authored:
    * Add possibility to ignore imports in test_fetcher
    * Style
  - Sylvain Gugger authored