- 13 Sep, 2022 1 commit

Rahul A R authored
* fixed bug which caused overwrite_cache to always be True (#18967)
* reformatting changes

- 09 Sep, 2022 1 commit

Rafał Jankowski authored
* NeptuneCallback improvements
* After review suggestions and deduplication of initial run
* Added volatile checkpoints support due to missing post-rebase commit
* Update README per review comments
  - Remove list formatting
  - Correct Neptune docs link

Co-authored-by: Sabine <sabine.nyholm@neptune.ai>

- 07 Sep, 2022 1 commit

Nicholas Broad authored
* add accelerator.end_training(); some trackers need this to end their runs
* fixup and quality
* add space
* add space again ?!?

- 06 Sep, 2022 1 commit

arun99481 authored
Co-authored-by: Arun Rajaram <arunrajaram@Aruns-MacBook-Pro.local>

- 01 Sep, 2022 1 commit

Sylvain Gugger authored

- 25 Aug, 2022 1 commit

Rahul A R authored

- 24 Aug, 2022 1 commit

Rahul A R authored
* fixed incorrect param to hasattr
* simplified condition checks
* code cleanup

- 22 Aug, 2022 1 commit

Atharva Ingle authored

- 18 Aug, 2022 2 commits

Atharva Ingle authored
* `model.tie_weights()` should be applied after `accelerator.prepare`; weight tying should be done after the model has been moved to the XLA device, as mentioned in the [PyTorch/XLA troubleshooting guide](https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#xla-tensor-quirks)
* format code
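
Why the order matters: moving a model to an XLA device re-creates its tensors, so aliasing set up by weight tying beforehand is silently lost. A minimal plain-Python sketch of the failure mode (`TinyModel` and `prepare` here are hypothetical stand-ins, not the real torch/XLA APIs):

```python
class TinyModel:
    """Stand-in for a model with an input embedding and an output projection."""
    def __init__(self):
        self.input_embed = [0.0] * 4   # stands in for an embedding weight
        self.output_proj = [0.1] * 4

    def tie_weights(self):
        # Weight tying: both attributes point at the same underlying buffer.
        self.output_proj = self.input_embed


def prepare(model):
    # Stands in for the move to an XLA device: each tensor is re-created
    # on the device, so object identity (the tie) is NOT preserved.
    fresh = TinyModel()
    fresh.input_embed = list(model.input_embed)
    fresh.output_proj = list(model.output_proj)
    return fresh


# Wrong order: tie first, then prepare; the tie is silently lost.
wrong = TinyModel()
wrong.tie_weights()
wrong = prepare(wrong)
assert wrong.output_proj is not wrong.input_embed

# Right order: prepare first, then tie; the weights stay shared.
right = prepare(TinyModel())
right.tie_weights()
assert right.output_proj is right.input_embed
```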

Zachary Mueller authored

- 17 Aug, 2022 1 commit

Stefan Schweter authored
* examples: add Bloom support for token classification (Flax, PyTorch and TensorFlow)
* examples: remove support for Bloom in token classification (Flax and TensorFlow currently have no support for it)

- 16 Aug, 2022 1 commit

zhoutang776 authored
* Update run_translation_no_trainer.py: fixes an error in selecting `no_decay` parameters, plus some small modifications for when the user continues training from a checkpoint
* fixes `no_decay` and `resume_step` issues:
  1. change the `no_decay` list
  2. if users continue training from a provided checkpoint, `resume_step` will not be initialized properly when `args.gradient_accumulation_steps != 1`
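
Both fixes can be sketched in plain Python (names are illustrative, not the script's exact variables): biases and LayerNorm weights are excluded from weight decay by substring match, and resuming must skip `gradient_accumulation_steps` batches for every completed optimizer step, not one:

```python
# 1. Parameter grouping: no weight decay for biases and LayerNorm weights.
no_decay = ["bias", "layer_norm.weight"]

def optimizer_grouped_parameters(named_parameters, weight_decay):
    return [
        {"params": [p for n, p in named_parameters
                    if not any(nd in n for nd in no_decay)],
         "weight_decay": weight_decay},
        {"params": [p for n, p in named_parameters
                    if any(nd in n for nd in no_decay)],
         "weight_decay": 0.0},
    ]

# 2. Resuming: each completed optimizer step consumed
# gradient_accumulation_steps batches, so skip that many per step.
def batches_to_skip(completed_steps, gradient_accumulation_steps):
    return completed_steps * gradient_accumulation_steps
```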

- 08 Aug, 2022 3 commits

Rasmus Arpe Fogh Jensen authored
* Added accelerate gradient accumulation wrapper to run_image_classification_no_trainer.py example script
* make fixup changes
* PR comments
* changed input to Accelerator based on PR comment, ran make fixup
* Added comment explaining the sync_gradients statement
* Fixed lr scheduler max steps
* Changed run_clm_no_trainer.py script to use accelerate gradient accum wrapper
* Fixed all scripts except wav2vec2 pretraining to use accelerate gradient accum wrapper
* Added accelerate gradient accum wrapper for wav2vec2_pretraining_no_trainer.py script
* make fixup and lr_scheduler step inserted back into run_qa_beam_search_no_trainer.py
* removed changes to run_wav2vec2_pretraining_no_trainer.py script and fixed using wrong constant in qa_beam_search_no_trainer.py script
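
A rough simulation of what the accumulation wrapper changes (the real accelerate entry point is `with accelerator.accumulate(model):`; this stand-in only counts optimizer steps): the optimizer steps once every `gradient_accumulation_steps` micro-batches, plus once on the final batch so a short last accumulation window is not dropped:

```python
def train(num_batches, accumulation_steps):
    # Counts how many optimizer steps a run performs; gradients are
    # accumulated across micro-batches and flushed on sync boundaries.
    optimizer_steps = 0
    pending = 0
    for batch_idx in range(num_batches):
        pending += 1  # backward() accumulates into the grad buffers
        sync_gradients = (pending == accumulation_steps
                          or batch_idx == num_batches - 1)
        if sync_gradients:
            # optimizer.step(), lr_scheduler.step(), optimizer.zero_grad()
            optimizer_steps += 1
            pending = 0
    return optimizer_steps
```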

Sylvain Gugger authored
* Fix compatibility with 1.12
* Remove pin from examples requirements
* Update torch scatter version
* fix torch.onnx.symbolic_opset12 import
* Reject bad version

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

regisss authored

- 06 Aug, 2022 2 commits

Julien Chaumond authored
* zero chance anyone's using that constant, no?
* `transformers-cli login` => `huggingface-cli login`
* `transformers-cli repo create` => `huggingface-cli repo create`
* `make style`

Julien Chaumond authored
* Delete valohai.yaml
* NLP => ML
* typo
* website supports https
* datasets
* 60k + modalities
* unrelated link fixing for accelerate
* Ok, those links were actually broken
* Fix link
* Make `AutoTokenizer` auto-link
* wording tweak
* add at least one non-nlp task

- 04 Aug, 2022 2 commits

Kian Sierra McGettigan authored
* swag_no_trainer updated with gather_for_metrics
* Removed unused variable samples_seen
* updated examples with gather_for_metrics
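
A hypothetical stand-in for the behavior `gather_for_metrics` provides (not the real accelerate implementation): distributed samplers pad the dataset so every process sees the same number of batches, and those padding duplicates must be dropped before computing metrics, which the scripts previously did by hand with `samples_seen`:

```python
def gather_for_metrics(per_process_outputs, dataset_size):
    # Concatenate predictions from all processes, then truncate the
    # sampler's padding duplicates so exactly dataset_size items remain.
    gathered = [x for outputs in per_process_outputs for x in outputs]
    return gathered[:dataset_size]
```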

Kian Sierra McGettigan authored
* swag_no_trainer updated with gather_for_metrics
* Removed unused variable samples_seen

- 03 Aug, 2022 1 commit

Ritik Nandwal authored
* Update no_trainer script for image-classification
* Update no_trainer scripts for language-modeling examples
* Remove unused variable
* Removing truncation from losses array for language modeling examples

- 02 Aug, 2022 1 commit

Yih-Dar authored
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>

- 01 Aug, 2022 3 commits

Sylvain Gugger authored
* Fix ROUGE, add example check and update README
* Stay consistent in values

Ogundepo Odunayo authored

atturaioe authored
* Migrate metric to Evaluate in pytorch examples
* Remove unused imports

- 29 Jul, 2022 1 commit

Sylvain Gugger authored
* Preliminary work on tokenizers
* Quality + fix tests
* Treat processors
* Fix pad
* Remove all uses of in tests, docs and examples
* Replace all as_target_tokenizer
* Fix tests
* Fix quality
* Update examples/flax/image-captioning/run_image_captioning_flax.py
* Style

Co-authored-by: amyeroberts <amy@huggingface.co>

- 27 Jul, 2022 1 commit

Lysandre authored

- 21 Jul, 2022 1 commit

Zachary Mueller authored
* Fix all tests

- 18 Jul, 2022 2 commits

John Giorgi authored

John Giorgi authored

- 13 Jul, 2022 1 commit

John Giorgi authored
* Add summarization name mapping for MultiNews

- 11 Jul, 2022 1 commit

Yulv-git authored
* Fix some typos
* Fix typo
* make fixup

Signed-off-by: Yulv-git <yulvchi@qq.com>

- 06 Jul, 2022 1 commit

ADAning authored
* Add ALL_LAYERNORM_LAYERS for LayerNorm
* fix bug of appending layer norm

- 29 Jun, 2022 1 commit

Zachary Mueller authored
* Fix all is_torch_tpu_available

- 28 Jun, 2022 1 commit

Sylvain Gugger authored

- 23 Jun, 2022 2 commits

Zachary Mueller authored
Properly calculate the total train iterations and recalculate num epochs in no_trainer scripts (#17856)
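
The corrected arithmetic can be sketched as follows (a sketch mirroring the pattern in the no_trainer scripts; the function name is illustrative): an optimizer step consumes `gradient_accumulation_steps` batches, and when `max_train_steps` is supplied the epoch count must be re-derived from it:

```python
import math

def training_schedule(num_batches, gradient_accumulation_steps,
                      num_train_epochs=None, max_train_steps=None):
    # One optimizer update per gradient_accumulation_steps batches.
    num_update_steps_per_epoch = math.ceil(
        num_batches / gradient_accumulation_steps)
    if max_train_steps is None:
        max_train_steps = num_train_epochs * num_update_steps_per_epoch
    else:
        # Recalculate epochs so the loop bounds and progress bar agree.
        num_train_epochs = math.ceil(
            max_train_steps / num_update_steps_per_epoch)
    return max_train_steps, num_train_epochs
```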

Zachary Mueller authored
* Adjust test arguments and use a new example test

- 22 Jun, 2022 1 commit

Eran Hirsch authored
Add logits_processor parameter, used by `generate`, to `Seq2SeqTrainer` methods `evaluate` and `predict` (#17805)
* Add logits_processor parameter, used by `generate`, to `Seq2SeqTrainer` methods `evaluate` and `predict`
* Add all generate parameters to `Seq2SeqTrainer`, and also to `QuestionAnsweringSeq2SeqTrainer` which overrides it
* Remove `self._num_beams` from trainer classes
* Run fixup; fix "Constraint" not exposed; fix synced_gpus to actually read from param
* Use kwargs
* Copy kwargs before making changes to it
* Fix style issues, unused imports

- 16 Jun, 2022 1 commit

Sylvain Gugger authored

- 15 Jun, 2022 1 commit

Jeff Rasley authored

- 07 Jun, 2022 1 commit

Sylvain Gugger authored
* Add examples telemetry
* Alternative approach
* Add to all other examples
* Add to templates as well
* Put framework separately
* Same for TensorFlow