- 21 Nov, 2022 9 commits
-
Joao Gante authored
-
Matthijs Hollemans authored
* add model files etc for MobileNetV2
* rename files for MobileNetV1
* initial implementation of MobileNetV1
* fix conversion script
* cleanup
* write docs
* tweaks
* fix conversion script
* extract hidden states
* fix test cases
* make fixup
* fixup it all
* remove main from doc link
* fixes
* fix tests
* fix up
* use google org
* fix weird assert
* fixup
* use google organization for checkpoints
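The new MobileNet checkpoints are hosted under the `google` organization on the Hub. A minimal, hedged usage sketch (the checkpoint name and the use of `AutoImageProcessor` are assumptions for illustration, not taken from the commit itself):

```python
# Hedged sketch: classify an image with one of the new MobileNetV2 checkpoints.
# The checkpoint name "google/mobilenet_v2_1.0_224" is an assumption.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileNetV2ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_1.0_224")
model = MobileNetV2ForImageClassification.from_pretrained("google/mobilenet_v2_1.0_224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Highest-scoring ImageNet class for the image
print(model.config.id2label[logits.argmax(-1).item()])
```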
-
Raj Rajhans authored
-
Younes Belkada authored
* run slow test on GPU
* remove unnecessary device assignment
* use `torch_device` instead
-
Ali Hassani authored
* Add LayerScale to NAT/DiNAT. Completely dropped the ball on LayerScale in the original PR (#20219). This is just an optional argument in both models, and is only activated for larger variants in order to provide training stability.
* Add LayerScale to NAT/DiNAT. Minor error fixed.
Co-authored-by: Ali Hassani <ahassanijr@gmail.com>
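A hedged sketch of what enabling the optional LayerScale might look like; the parameter name `layer_scale_init_value` and the chosen value are assumptions for illustration:

```python
# Assumed configuration knob: a small positive init value enables LayerScale,
# which the larger NAT/DiNAT variants use for training stability.
from transformers import DinatConfig, DinatModel

config = DinatConfig(layer_scale_init_value=1e-5)  # assumption: 0.0 would leave it disabled
model = DinatModel(config)
print(config.layer_scale_init_value)
```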
-
Ian C authored
* Update _toctree and clone original content
* Translate first three sections
* Add more translated chapters. Only 3 more left.
* Finish translation
* Run style from doc-builder
* Address recommended changes from reviewer
-
BFSS authored
* Add zh (Chinese) quicktour (#20095)
* add zh to doc workflow
* remove untranslated entries from toctree
Co-authored-by: BeifangSusu <BeifangSusu@bfss.com>
-
Joao Gante authored
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
Yih-Dar authored
* fix device issue
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
- 18 Nov, 2022 15 commits
-
Steven Liu authored
-
Ali Hassani authored
* Add DiNAT
* Adds DiNAT + tests
* Minor fixes
* Added HF model
* Add natten to dependencies.
* Cleanup
* Minor fixup
* Reformat
* Optional NATTEN import.
* Reformat & add doc to _toctree
* Reformat (finally)
* Dummy objects for DiNAT
* Add NAT + minor changes. Adds NAT as its own independent model + docs, tests. Adds NATTEN to ext deps to ensure CI picks it up.
* Remove natten from `all` and `dev-torch` deps, add manual pip install to ci tests
* Minor fixes.
* Fix READMEs.
* Requested changes to docs + minor fixes.
* Requested changes.
* Add NAT/DiNAT tests to layoutlm_job
* Correction to Dinat doc.
* Requested changes.
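Since NATTEN is an optional extra rather than a core dependency, using the new models requires installing it separately. An illustrative sketch, assuming the availability helper is exposed as `is_natten_available`:

```python
# NAT/DiNAT need the optional NATTEN package (pip install natten); the import is
# guarded so the rest of the library keeps working without it.
from transformers.utils import is_natten_available

if is_natten_available():
    from transformers import DinatConfig, DinatModel

    model = DinatModel(DinatConfig())  # randomly initialised; checkpoint names omitted here
else:
    print("Install natten to use NAT/DiNAT: pip install natten")
```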
-
Joao Gante authored
* future proof our tf code
* parse tf versions
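A minimal sketch, under the assumption that "future proofing" means gating behaviour on the installed TensorFlow version; the 2.11 threshold is used only as an example:

```python
# Illustrative version gate, not the commit's actual checks: parse tf.__version__
# once and branch on it so newer releases don't break the existing code path.
from packaging import version
import tensorflow as tf

tf_version = version.parse(tf.__version__)

if tf_version >= version.parse("2.11"):
    # Newer TF: e.g. Keras switched its default optimizer implementation in 2.11
    print(f"TF {tf.__version__}: using the new code path")
else:
    print(f"TF {tf.__version__}: using the backwards-compatible code path")
```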
-
Steven Liu authored
* remove double brackets
* oops get other bracket
-
Yih-Dar authored
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Zachary Mueller authored
-
Nicolas Patry authored
* [Proposal] Breaking change to `zero-shot-object-detection` for improved consistency.

This is a proposal to modify the output of `zero-shot-object-detection` to provide better alignment with other pipelines. The output is now strictly the same as `object-detection`, whereas before it would output lists of lists. The name `candidate_labels` is used throughout for consistency with other `zero-shot` pipelines. The pipeline is changed to `ChunkPipeline` to support batching cleanly. This removes all the lists-of-lists shenanigans; it is now a matter of the base pipeline handling all of this rather than this specific one.

**Breaking change**: It removes potentially complex calls such as `pipe(images=[image1, image2], text_queries=[candidates1, candidates2])` and supports only `pipe([{"image": image1, "candidate_labels": candidates1}, {"image": image2, "candidate_labels": candidates2}])` when dealing with lists and/or datasets. We could keep them, but it would add a lot of complexity to the code base; since the pipeline is rather young, I'd rather break it to keep the code simpler, but we can revert this.

**Breaking change**: The name of the argument is now `image` instead of `images`, since it expects only one image by default. This is revertable like the previous one.

**Breaking change**: The types are now simplified and flattened: `pipe(inputs) == [{**object1}, {**object2}]` instead of the previous `pipe(inputs) == [[{**object1}, {**object1}], [{**object2}]]`, where the different instances would be grouped by candidate labels within lists. IMHO this is not really desirable, since it would output empty lists and only adds superfluous indirection compared to `object-detection`.

This is relatively change-free in terms of the results themselves; it does change the computation, however, since the batching is now handled by the pipeline itself. It **did** change the results for the small models, so there seems to be a real difference in how the models handle this.

* Fixing the doctests.
* Behind is_torch_available.
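A sketch of the new calling convention described above (the checkpoint is the pipeline's usual OWL-ViT default, assumed here for illustration):

```python
from transformers import pipeline

detector = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch32")

# New input format: one dict per image, with `image` and `candidate_labels` keys
outputs = detector(
    [
        {
            "image": "http://images.cocodataset.org/val2017/000000039769.jpg",
            "candidate_labels": ["cat", "remote control"],
        }
    ]
)

# Detections are no longer grouped by candidate label; each image yields a flat
# list of {"score", "label", "box"} dicts, matching `object-detection`.
print(outputs)
```
-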
atturaioe authored
* Add AnyPrecisionAdamW optimizer
* Add optim_args argument to TrainingArgs
* Add tests for AnyPrecisionOptimizer
* Change AnyPrecisionAdam default params to float32
* Move default_anyprecision_kwargs in trainer test
* Rename AnyPrecisionAdamW
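A hedged sketch of how the new optimizer might be selected through the Trainer; the `optim` string, the `optim_args` keys, and the torchdistx dependency are assumptions for illustration:

```python
# Assumed usage: pick AnyPrecisionAdamW via TrainingArguments and tune it through
# the new free-form `optim_args` string (the optimizer itself comes from torchdistx).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    optim="adamw_anyprecision",                                      # assumed optimizer name
    optim_args="use_kahan_summation=True,momentum_dtype=bfloat16",   # assumed keys
)
print(training_args.optim, training_args.optim_args)
```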
-
Sylvain Gugger authored
-
Sylvain Gugger authored
-
Sylvain Gugger authored
-
amyeroberts authored
* Add padding transformation
* Add in upstream changes
* Update tests & docs
* Code formatting for tuples in docstring
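A hedged sketch of the new padding transform, assuming it is exposed as `pad` in `transformers.image_transforms` with per-axis (before, after) padding; both the name and the signature are assumptions:

```python
# Assumed helper: pad a channels-first image by (before, after) pixels on the
# height and width axes, filling with a constant value.
import numpy as np
from transformers.image_transforms import pad

image = np.zeros((3, 32, 32), dtype=np.float32)  # (channels, height, width)
padded = pad(image, padding=((2, 2), (4, 4)), constant_values=0.0)
print(padded.shape)  # expected (3, 36, 40) under the assumed signature
```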
-
Sanchit Gandhi authored
* [ASR Examples] Update README for seq2seq
* add language info
* add training results
* re-word
-
Arthur authored
-
Arthur authored
* fix the doc to specify that add_prefix_space = False
* add correct expected output
-
- 17 Nov, 2022 14 commits
-
Yih-Dar authored
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Younes Belkada authored
- simplifies the device checking test
-
Yih-Dar authored
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
NielsRogge authored
* Add ResNetBackbone
* Define channels and strides as property
* Remove file
* Add test for backbone
* Update BackboneOutput class
* Remove strides property
* Fix docstring
* Add backbones to SHOULD_HAVE_THEIR_OWN_PAGE
* Fix auto mapping name
* Add sanity check for out_features
* Set stage names based on depths
* Update to tuple
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
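A hedged sketch of the new backbone API; the stage names and the `feature_maps` attribute on `BackboneOutput` follow the PR description and may differ in detail:

```python
# Select which ResNet stages the backbone should return feature maps for.
import torch
from transformers import ResNetBackbone, ResNetConfig

config = ResNetConfig(out_features=["stage2", "stage3", "stage4"])  # stage names assumed
backbone = ResNetBackbone(config)

pixel_values = torch.randn(1, 3, 224, 224)
outputs = backbone(pixel_values)
for feature_map in outputs.feature_maps:
    print(feature_map.shape)  # one map per requested stage, shallow to deep
```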
-
raghavanone authored
* Add docstrings for canine model
* Update CanineForTokenClassification
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Wang, Yi authored
Set the default `cache_enabled` to True, aligned with the default value in PyTorch cpu/cuda amp autocast (#20289)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
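For reference, the PyTorch default this aligns with (a small illustrative snippet, not the Trainer's actual code):

```python
# torch.autocast defaults cache_enabled to True for both cpu and cuda device types;
# passing it explicitly here only makes the aligned default visible.
import torch

with torch.autocast(device_type="cpu", dtype=torch.bfloat16, cache_enabled=True):
    a = torch.randn(8, 8)
    b = torch.randn(8, 8)
    print((a @ b).dtype)  # bfloat16 while under autocast
```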
-
Nicolas Patry authored
* Fixing the doctest failures.
* Fixup.
-
Joao Gante authored
* move contrastive search test to slow
-
Joao Gante authored
* test hub tf callback
* create repo before cloning it
-
amyeroberts authored
* Image transforms functionality used instead
* Import torch
* Import rather than copy
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
-
Nicolas Patry authored
* Adding doctest for `object-detection` pipeline.
* Removed nested_simplify.
-
Nicolas Patry authored
* Adding `zero-shot-object-detection` pipeline doctest.
* Remove nested_simplify.
-
Younes Belkada authored
* add warning on 8-bit models - added tests - added wrapper
* move to a private attribute - remove wrapper - changed `save_pretrained` method
* Apply suggestions from code review
* fix suggestions
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
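An illustrative sketch of the situation the warning targets; the checkpoint and the exact warn-versus-raise behaviour are assumptions here:

```python
# Models loaded in 8-bit via bitsandbytes keep quantized weights, so serializing
# them with save_pretrained is now guarded (requires a CUDA GPU and bitsandbytes).
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m",   # checkpoint name assumed for illustration
    device_map="auto",
    load_in_8bit=True,
)
model.save_pretrained("bloom-560m-8bit")  # expected to warn or raise for 8-bit weights
```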
-
Arthur authored
* update part of the doc
* add temp values, fix part of the doc
* add template outputs
* add correct models and outputs
* style
* fixup
-
- 16 Nov, 2022 2 commits
-
Zachary Mueller authored
-
Saad Mahmud authored
* Update configuration_deformable_detr.py comment
* Add DeformableDetrConfig to documentation_tests.txt
-