- 16 Aug, 2023 5 commits
-
-
amyeroberts authored
* Add copied from statements for image processors
* Move out rescale and normalize to base image processor
* Remove rescale and normalize from vit (post rebase)
* Update docstrings and tidy up
* PR comments
* Add input_data_format as preprocess argument
* Resolve tests and tidy up
* Remove num_channels argument
* Update docstrings -> default ints not in code formatting
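With rescale and normalize promoted to the shared base image processor, a minimal sketch of calling them directly; the ViT checkpoint name is only an assumption for illustration:

    import numpy as np
    from transformers import AutoImageProcessor

    # Assumed checkpoint; any processor inheriting from the base image processor should expose these methods
    image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
    image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)

    # rescale and normalize now live on the base class instead of being re-implemented per model
    rescaled = image_processor.rescale(image, scale=1 / 255)
    normalized = image_processor.normalize(rescaled, mean=image_processor.image_mean, std=image_processor.image_std)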
-
Yih-Dar authored
fix Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Marc Sun authored
fix test
-
Joao Gante authored
-
Joao Gante authored
-
- 15 Aug, 2023 1 commit
-
-
Zach Mueller authored
* Make training args fully immutable
* Working tests, PyTorch
* In test_trainer
* during testing
* Use proper dataclass way
* Fix test
* Another one
* Fix tf
* Lingering slow
* Exception
* Clean
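Since TrainingArguments is now frozen after construction, a minimal sketch of the intended usage pattern, assuming that mutation after init raises dataclasses.FrozenInstanceError:

    import dataclasses
    from transformers import TrainingArguments

    args = TrainingArguments(output_dir="out", learning_rate=5e-5)

    # Mutating after construction is no longer allowed; assumed to raise dataclasses.FrozenInstanceError
    # args.learning_rate = 1e-4

    # Derive a modified copy instead of mutating in place
    new_args = dataclasses.replace(args, learning_rate=1e-4)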
-
- 14 Aug, 2023 1 commit
-
-
amyeroberts authored
* Remove softmax for EfficientNet * Update integration test values * Fix up
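With the softmax removed from the classification head, callers that want probabilities apply it themselves; a hedged sketch where the google/efficientnet-b0 checkpoint name and the blank test image are assumptions:

    import torch
    from PIL import Image
    from transformers import AutoImageProcessor, EfficientNetForImageClassification

    checkpoint = "google/efficientnet-b0"  # assumed checkpoint name
    processor = AutoImageProcessor.from_pretrained(checkpoint)
    model = EfficientNetForImageClassification.from_pretrained(checkpoint)

    image = Image.new("RGB", (224, 224))  # placeholder image
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # raw logits now; no softmax applied inside the model
    probs = logits.softmax(dim=-1)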
-
- 11 Aug, 2023 4 commits
-
-
amyeroberts authored
Make CI less brittle
-
amyeroberts authored
* Enable specifying input data format, overriding automatic inference * Add tests
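A minimal sketch of forcing the input data format instead of relying on inference; the checkpoint name is an assumption:

    import numpy as np
    from transformers import AutoImageProcessor
    from transformers.image_utils import ChannelDimension

    image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")  # assumed checkpoint

    # A channels-first array whose layout could be ambiguous to automatic inference
    image = np.random.randint(0, 256, (3, 64, 64), dtype=np.uint8)
    inputs = image_processor(image, input_data_format=ChannelDimension.FIRST, return_tensors="pt")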
-
Joao Gante authored
-
amyeroberts authored
* Refactor image processor test mixin
  - Move test_call_numpy, test_call_pytorch, test_call_pil to mixin
  - Rename mixin to reflect handling of logic more than saving
  - Add prepare_image_inputs, expected_image_outputs for tests
* Fix for oneformer
-
- 10 Aug, 2023 3 commits
-
-
Marc Sun authored
* GPTQ integration
* Add tests for gptq
* support for more quantization models
* fix style
* typo
* fix method
* Update src/transformers/modeling_utils.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* add dataclass and fix quantization_method
* fix doc
* Update tests/quantization/gptq/test_gptq.py Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* Apply suggestions from code review Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* modify dataclass
* add GPTQConfig import
* fix typo
* fix tests
* remove dataset as req arg
* remove tokenizer import
* add offload cpu quantization test
* fix check dataset
* modify dockerfile
* protect trainer
* style
* test for config
* add more log
* overwrite torch_dtype
* draft doc
* modify quantization_config docstring
* fix class name in docstring
* Apply suggestions from code review Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* more warning
* fix 8bit kwargs tests
* peft compatibility
* remove var
* fix is_gptq_quantized
* remove is_gptq_quantized
* fix wrap
* Update src/transformers/modeling_utils.py Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* add exllama
* skip test
* overwrite float16
* style
* fix skip test
* Apply suggestions from code review Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* fix docstring formatting
* add doc
* better test
---------
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
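The GPTQ integration is exposed through a quantization config passed at load time; a minimal sketch, where the model id and calibration dataset are assumptions and the optimum and auto-gptq packages are required:

    from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

    model_id = "facebook/opt-125m"  # assumed small model, for illustration only
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    # Calibrate and quantize to 4 bits while loading the model
    quantization_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=quantization_config)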
-
Joao Gante authored
-
Joao Gante authored
* strict gen config save; Add tests * add note that the warning will be an exception in v4.34
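A hedged sketch of the stricter save path: saving a generation config with contradictory flags now emits a warning, which the note above says will become an exception in v4.34. The specific flag combination below is only an assumed example of an invalid config:

    from transformers import GenerationConfig

    # temperature is only meaningful when sampling, so this pair is assumed to be flagged by validation
    generation_config = GenerationConfig(do_sample=False, temperature=0.7)
    generation_config.save_pretrained("my-model-dir")  # warns today; planned to raise in v4.34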
-
- 09 Aug, 2023 3 commits
-
-
amyeroberts authored
-
hukuda222 authored
* aligned sample_beam specs with beam_search
* pull origin main
* Revert "pull origin main" This reverts commit 06d356f1137bb52272e120a03636598c44449cf3.
* update test_utils.py
* fix format
* remove comment
---------
Co-authored-by: Shogo Fujita <shogo.fujita@legalontech.jp>
-
Yoach Lacombe authored
* update bark generation configs for more coherent parameters * make style * update bark hub repo
-
- 08 Aug, 2023 4 commits
-
-
Yih-Dar authored
fix Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Sanchit Gandhi authored
* [ASR Pipeline] Clarify return timestamps
* fix indentation
* fix ctc check
* fix ctc error message!
* fix test
* fix other test
* add new tests
* final comment
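A minimal sketch of the clarified return_timestamps usage; the checkpoint and audio file names are assumptions, and CTC models accept "word" or "char" rather than True:

    from transformers import pipeline

    asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny.en")  # assumed checkpoint

    # Segment-level timestamps; pass return_timestamps="word" for word-level where supported
    result = asr("sample.flac", return_timestamps=True)
    print(result["chunks"])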
-
Yih-Dar authored
* fix * fix --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Matthew Hoffman authored
* Register ModelOutput subclasses as supported torch.utils._pytree nodes Fixes #25357 where DDP with static_graph=True does not sync gradients when calling backward() over tensors contained in ModelOutput subclasses
* Add test for torch pytree ModelOutput serialization and deserialization
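A small sketch of what the pytree registration enables; torch.utils._pytree is a private torch API, so treat the exact helpers as an assumption:

    import torch
    import torch.utils._pytree as pytree
    from transformers.modeling_outputs import BaseModelOutput

    output = BaseModelOutput(last_hidden_state=torch.ones(1, 3, 4))

    # ModelOutput subclasses are now registered pytree nodes, so they survive a flatten/unflatten round trip
    leaves, spec = pytree.tree_flatten(output)
    restored = pytree.tree_unflatten(leaves, spec)
    assert torch.equal(restored.last_hidden_state, output.last_hidden_state)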
-
- 07 Aug, 2023 3 commits
-
-
Pedro Lira authored
* Add mask2former fp16 support
* Clear consistency/quality issues
* Fix consistency/quality (2)
* Add integration test for mask2former (fp16 case)
* Fix code quality
* Add integration test for maskformer (fp16 case)
* Add integration test for oneformer (fp16 case)
* Remove slow decorator from fp16 tests
* Fix lint
* Remove usage of full inference and value checks for fp16
* Temporarily comment slow for {mask, mask2, one}former
* Add fp16 support to oneformer
* Revert "Temporarily comment slow for {mask, mask2, one}former" This reverts commit e5371edabd301cf56079def0421a0a87df307cb0.
* Remove dtype conversion noop
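A hedged sketch of half-precision Mask2Former inference after this change; the checkpoint name, CUDA device, and blank test image are assumptions:

    import torch
    from PIL import Image
    from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

    checkpoint = "facebook/mask2former-swin-tiny-coco-instance"  # assumed checkpoint name
    processor = AutoImageProcessor.from_pretrained(checkpoint)
    model = Mask2FormerForUniversalSegmentation.from_pretrained(checkpoint, torch_dtype=torch.float16).to("cuda")

    image = Image.new("RGB", (384, 384))  # placeholder image
    inputs = processor(images=image, return_tensors="pt")
    # Cast only floating-point inputs to fp16; keep integer masks as-is
    inputs = {k: v.to("cuda", torch.float16) if v.is_floating_point() else v.to("cuda") for k, v in inputs.items()}
    with torch.no_grad():
        outputs = model(**inputs)
-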
Sylvain Gugger authored
* First draft
* Deal with progress bars
* Update src/transformers/utils/hub.py Co-authored-by: Lucain <lucainp@gmail.com>
* Address review comments
* Forgot one
* Pin hf_hub
* Add argument for push all and fix tests
* Fix tests
* Address review comments
---------
Co-authored-by: Lucain <lucainp@gmail.com>
-
Yih-Dar authored
* fix * fix * fix --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
- 06 Aug, 2023 1 commit
-
-
Guillaume "Vermeille" Sanchez authored
-
- 04 Aug, 2023 4 commits
-
-
Yih-Dar authored
* temp * update * update * update * small dim * small dim * small dim * fix * update * fix * fix * fix * fix * fix * fix * fix --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Sylvain Gugger authored
* Document check copies better and add tests
* Include header in check for copies
* Manual fixes
* Try autofix
* Fixes
* Clean tests
* Finalize doc
* Remove debug print
* More fixes
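For context, the check-copies mechanism keys off comments like the one below; utils/check_copies.py (or `make fix-copies`) keeps the annotated code in sync with its source. The class body is copied from BertSelfOutput purely as an illustration, and MyModelSelfOutput is a hypothetical name:

    import torch.nn as nn

    # Copied from transformers.models.bert.modeling_bert.BertSelfOutput with Bert->MyModel
    class MyModelSelfOutput(nn.Module):
        def __init__(self, config):
            super().__init__()
            self.dense = nn.Linear(config.hidden_size, config.hidden_size)
            self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
            self.dropout = nn.Dropout(config.hidden_dropout_prob)

        def forward(self, hidden_states, input_tensor):
            hidden_states = self.dense(hidden_states)
            hidden_states = self.dropout(hidden_states)
            hidden_states = self.LayerNorm(hidden_states + input_tensor)
            return hidden_states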
-
Sylvain Gugger authored
* Deal better with nested configs
* Fixes
* More fixes
* Fix last test
* Clean up existing configs
* Remove hack in MPT Config
* Update src/transformers/configuration_utils.py Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* Fix setting a nested config via dict in the kwargs
* Adapt common test
* Add test for nested config load with dict
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
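A minimal sketch of the now-supported pattern of passing a nested sub-config as a plain dict; Blip2Config is only an assumed example of a composite config:

    from transformers import Blip2Config

    # The text sub-config can be supplied as a dict and is promoted to a proper config object
    config = Blip2Config(text_config={"num_hidden_layers": 2})
    print(type(config.text_config))  # a config class, not a dict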
-
Sylvain Gugger authored
-
- 03 Aug, 2023 3 commits
-
-
Roland Szabo authored
* Add timeout parameter to load_image function.
* Remove line.
* Reformat code Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Add parameter to docs.
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
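A minimal sketch of the new parameter; the URL is a placeholder:

    from transformers.image_utils import load_image

    # Fail after 5 seconds instead of hanging indefinitely on a slow remote host
    image = load_image("https://example.com/some_image.png", timeout=5)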
-
Yoach Lacombe authored
* add generate method to SpeechT5ForTextToSpeech
* update speecht5forTTS docstrings
* Remove defaults to None in generate docstrings Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
---------
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
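A hedged sketch of the added entry point; the checkpoints are the documented microsoft ones, and the zero speaker embedding is a placeholder for real x-vectors:

    import torch
    from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

    processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
    model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
    vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

    inputs = processor(text="Hello, world!", return_tensors="pt")
    speaker_embeddings = torch.zeros(1, 512)  # placeholder; normally speaker x-vector embeddings
    speech = model.generate(inputs["input_ids"], speaker_embeddings=speaker_embeddings, vocoder=vocoder)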
-
amyeroberts authored
* Update InstructBLIP values Note: the tests are not independent. Running the test independently produces different logits compared to running all the integration tests
* Update test values after rescale update
* Remove leftover commented out code
* Revert to previous rescaling logic
* Update rescale tests
-
- 02 Aug, 2023 5 commits
-
-
Yih-Dar authored
* CI with layers=2 --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Patrick von Platen authored
* [MMS] Fix mms * [MMS] Fix mms * fix mms loading * Apply suggestions from code review * make style * Update tests/models/wav2vec2/test_modeling_wav2vec2.py
-
Yupeng Jia authored
* Update modeling_deformable_detr.py Fix bugs for two stage training
* Update modeling_deformable_detr.py
* Add test_two_stage_training to DeformableDetrModelTest
---------
Co-authored-by: yupeng.jia <yupeng.jia@momenta.ai>
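For reference, a minimal sketch of instantiating the two-stage variant whose training path this fixes; the flag combination is illustrative, not taken from the commit:

    from transformers import DeformableDetrConfig, DeformableDetrForObjectDetection

    # Two-stage mode: encoder region proposals seed the decoder queries; box refinement is usually paired with it
    config = DeformableDetrConfig(two_stage=True, with_box_refine=True)
    model = DeformableDetrForObjectDetection(config)  # note: the default config pulls a timm ResNet-50 backbone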
-
amyeroberts authored
Rescale tests - cast to float after rescaling to reflect #25229
-
YQ authored
* add test for `get_keys_to_not_convert` * add minimum patch to keep mpt lm_head from 8bit quantization * add revision to
-
- 01 Aug, 2023 1 commit
-
-
Younes Belkada authored
* add `require_bitsandbytes` on MPT integration tests * add it on mpt as well
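A minimal sketch of the decorator in a test module; the test class and body are illustrative only:

    import unittest

    from transformers.testing_utils import require_bitsandbytes, require_torch_gpu


    @require_bitsandbytes
    @require_torch_gpu
    class MptIntegrationTest(unittest.TestCase):  # hypothetical test class
        def test_generate_8bit(self):
            # Skipped automatically when bitsandbytes or a CUDA device is unavailable
            pass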
-
- 31 Jul, 2023 2 commits
-
-
Yih-Dar authored
* update tiny_model_summary.json * update * update * update --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Yih-Dar authored
fix Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-