- 13 Jun, 2022 10 commits
-
-
Daniel Stancl authored
* Initial commit * Make some fixes * Make PT model full forward pass * Drop TF & Flax implementation, fix copies etc * Add Flax model and update some corresponding stuff * Drop some TF things * Update config and flax local attn * Add encoder_attention_type to config * . * Update docs * Do some cleansing * Fix some issues -> make style; add some docs * Fix position_bias + mask addition + Update tests * Fix repo consistency * Fix model consistency by removing flax operation over attn_mask * [WIP] Add PT TGlobal LongT5 * . * [WIP] Add flax tglobal model * [WIP] Update flax model to use the right attention type in the encoder * Fix flax tglobal model forward pass * Make use of global_relative_attention_bias * Add test suites for TGlobal model * Fix minor bugs, clean code * Fix pt-flax equivalence though not convinced with correctness * Fix LocalAttn implementation to match the original impl. + update READMEs * Few updates * Update: [Flax] improve large model init and loading #16148 * Add ckpt conversion script according to #16853 + handle torch device placement * Minor updates to conversion script. * Typo: AutoModelForSeq2SeqLM -> FlaxAutoModelForSeq2SeqLM * gpu support + dtype fix * Apply some suggestions from code review Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Remove (de)parallelize stuff * Edit shape comments * Update README.md * make fix-copies * Remove caching logic for local & tglobal attention * Apply another batch of suggestions from code review * Add missing checkpoints * Format converting scripts * Drop (de)parallelize links from longT5 mdx * Fix converting script + revert config file change * Revert "Remove caching logic for local & tglobal attention" This reverts commit 2a619828f6ddc3e65bd9bb1725a12b77fa883a46. * Stash caching logic in Flax model * Make side relative bias used always * Drop caching logic in PT model * Return side bias as it was * Drop all remaining model parallel logic * Remove clamp statements * Move test files to the proper place * Update docs with new version of hf-doc-builder * Fix test imports * Make some minor improvements * Add missing checkpoints to docs * Make TGlobal model compatible with torch.onnx.export * Replace some np.ndarray with jnp.ndarray * Fix TGlobal for ONNX conversion + update docs * fix _make_global_fixed_block_ids and masked neg value * update flax model * style and quality * fix imports * remove load_tf_weights_in_longt5 from init and fix copies * add slow test for TGlobal model * typo fix * Drop obsolete is_parallelizable and one warning * Update __init__ files to fix repo-consistency * fix pipeline test * Fix some device placements * [wip]: Update tests -- need to generate summaries to update expected_summary * Fix quality * Update LongT5 model card * Update (slow) summarization tests * make style * rename checkpoints * finish * fix flax tests

Co-authored-by: phungvanduy <pvduy23@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: patil-suraj <surajp815@gmail.com>
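A minimal usage sketch for the LongT5 model this PR adds, assuming the `google/long-t5-tglobal-base` checkpoint referenced in the PR's docs updates is available on the Hub:

```python
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

# checkpoint name is an assumption based on the TGlobal variant added here
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-base")

# LongT5's local/transient-global attention targets long inputs
inputs = tokenizer("summarize: " + "a long document " * 200, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```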
-
haohanchen-yagao authored
* Add FP16 support for SageMaker model parallel * minor fix * fix indentation * handle mixed-precision exception for SMMP * minor fix * remove amp implementation on SMMP * remove redundant stuff * reformat trainer * restyling * reformat
-
Wang, Yi authored
* enable CPU distributed training using mpirun; command like: mpirun -n 2 python3 run_qa.py --no_cuda --xpu_backend ccl xxxx; MASTER_ADDR and MASTER_PORT should be set as env: export MASTER_ADDR=127.0.0.1 and export MASTER_PORT=29500 Signed-off-by: Wang, Yi A <yi.a.wang@intel.com> * fix according to the review comment Signed-off-by: Wang, Yi A <yi.a.wang@intel.com> * use accelerate logic for CPU distributed training to set the "RANK", "LOCAL_RANK", and "WORLD_SIZE" environment variables Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
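A minimal sketch of the environment this launch recipe expects, with values taken directly from the commit message above:

```python
import os

# per the commit message, these must be exported before launching with
# mpirun -n 2 python3 run_qa.py --no_cuda --xpu_backend ccl ...
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
```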
-
Bram Vanroy authored
* allow scope from trainer arg * add ray_scope to training args * escape double quotes * make style && quality * attempt to solve doc style issues * splitting up URLs for style * make fixup * Update src/transformers/training_args.py Co-authored-by: Antoni Baum <antoni.baum@protonmail.com> * make style

Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
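A short usage sketch of the new argument; the value shown follows Ray Tune's `get_best_trial` scope options and is illustrative:

```python
from transformers import TrainingArguments

# ray_scope controls which metric scope Ray Tune uses to pick the best trial
args = TrainingArguments(output_dir="out", ray_scope="last")
```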
-
Will Frey authored
I'm guessing that the intention was for the `_no_split_modules` class attribute of `GPTNeoXPreTrainedModel` to be set to `["GPTNeoXLayer"]`, akin to how it's set to `["GPTJBlock"]` for `GPTJPreTrainedModel`. If this is incorrect, please feel free to just close the PR. Thanks!
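A sketch of the suggested change, mirroring the GPT-J pattern the author cites:

```python
from transformers import PreTrainedModel

class GPTNeoXPreTrainedModel(PreTrainedModel):
    # mirrors GPTJPreTrainedModel, which sets _no_split_modules = ["GPTJBlock"];
    # tells accelerate's device-map logic never to split a layer across devices
    _no_split_modules = ["GPTNeoXLayer"]
```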
-
Sylvain Gugger authored
* Fix dtype getters * Proper fix for dtype getter * Style and comment * Always use last for consistency * Quality
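A hedged sketch of the "always use last" idea this fix describes; the actual helper in `modeling_utils` may differ in details:

```python
import torch

def get_parameter_dtype(module: torch.nn.Module) -> torch.dtype:
    # sketch: prefer the dtype of the last floating-point parameter seen,
    # falling back to the last parameter of any dtype (the "missing fallback")
    last_dtype = None
    last_float_dtype = None
    for param in module.parameters():
        last_dtype = param.dtype
        if param.is_floating_point():
            last_float_dtype = param.dtype
    return last_float_dtype if last_float_dtype is not None else last_dtype
```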
-
Bram Vanroy authored
-
Saint authored
Co-authored-by: Saint <saint@st-mini.local>
-
Sijun He authored
* wip * rebase * all tests pass * rebase * ready for PR * address comments * fix styles * add require_torch to pipeline test * remove remote image to improve CI consistency * address comments; fix tf/flax tests * address comments; fix tf/flax tests * fix tests; add alias * repo consistency tests * Update src/transformers/pipelines/visual_question_answering.py Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com> * address comments * Update src/transformers/pipelines/visual_question_answering.py Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com> * merge * Update src/transformers/models/auto/modeling_auto.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * merge

Co-authored-by: Sijun He <sijunhe@Sijuns-MacBook-Pro.local>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
Ayush Mangal authored
-
- 10 Jun, 2022 17 commits
-
-
Yih-Dar authored
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Domenic Rosati authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
* [BigBirdFlaxTests] Make tests slow * up * correct black with new version
-
Loubna Ben Allal authored
- use CodeParrot scores of v1.1
- change evaluation command to use accelerate
-
Simon Brandeis authored
* Raise RepoNotFoundError in case of 401 * Include changes from revert-17646-skip_repo_not_found * Add a comment * 💄 Code quality * 💚 Update `get_from_cache` test * 💚 Code quality & skip failing test
-
Balaji authored
The VisibleDeprecationWarning is addressed by specifying dtype=object when creating the numpy array. Update code based on review feedback. Undo whitespace changes to tokenization_utils_base.py. Co-authored-by: I like data <ilikedata@nym.hush.com>
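A minimal reproduction of the fix described above; the ragged input is illustrative:

```python
import numpy as np

# building an array from ragged (unequal-length) sequences triggers
# VisibleDeprecationWarning on NumPy >= 1.20 unless dtype=object is explicit
ragged_batch = [[101, 2023, 102], [101, 102]]
arr = np.array(ragged_batch, dtype=object)
```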
-
Sylvain Gugger authored
-
amyeroberts authored
-
Lysandre authored
-
Lysandre authored
-
dependabot[bot] authored
Bumps [cookiecutter](https://github.com/cookiecutter/cookiecutter) from 1.7.2 to 2.1.1.
- [Release notes](https://github.com/cookiecutter/cookiecutter/releases)
- [Changelog](https://github.com/cookiecutter/cookiecutter/blob/master/HISTORY.md)
- [Commits](https://github.com/cookiecutter/cookiecutter/compare/1.7.2...2.1.1)
---
updated-dependencies:
- dependency-name: cookiecutter
  dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
-
Alara Dirik authored
* enable crop_center method to handle (W, H, C) images * minor style and comment edits
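A hypothetical sketch of a channels-last center crop like the one this change enables; the function name and signature are illustrative, not the library's exact API:

```python
import numpy as np

def center_crop(image: np.ndarray, size: int) -> np.ndarray:
    # assumes a channels-last (H, W, C) array; crops a size x size window
    # centered on the image, leaving the channel axis untouched
    h, w = image.shape[0], image.shape[1]
    top = (h - size) // 2
    left = (w - size) // 2
    return image[top : top + size, left : left + size, ...]
```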
-
Alara Dirik authored
* move clip image utils to image_utils.py * dont default to square images * fix typo, revert change to test file * edit convert_rgb comments
-
Sylvain Gugger authored
-
Martina Fumanelli authored
* Add Italian translation for autoclass_tutorial.mdx * Fix synthesis Co-authored-by: martina.fumanelli <martina.fumanelli@MBP-di-martinafumanelli.local>
-
- 09 Jun, 2022 13 commits
-
-
Stas Bekman authored
-
mrbean authored
* convert assertion to raised exception in debertav2 * change assert to raise exception in deberta * fix messages
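The pattern being applied, with hypothetical names (asserts are stripped under `python -O`, so explicit exceptions with messages are preferred):

```python
import torch

def check_scores(attention_scores: torch.Tensor, expected_shape: torch.Size) -> None:
    # replaces `assert attention_scores.size() == expected_shape`
    # with an exception that always fires and carries a useful message
    if attention_scores.size() != expected_shape:
        raise ValueError(
            f"Attention scores have shape {tuple(attention_scores.size())}, "
            f"expected {tuple(expected_shape)}"
        )
```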
-
Yih-Dar authored
* pre-build deepspeed Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Stas Bekman authored
* [modeling_utils] torch_dtype/auto fixes * add test * apply suggestions * add missing fallback * Renaming things * Use for else Co-authored-by: Sylvain Gugger <Sylvain.gugger@gmail.com>
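A short usage sketch of the `torch_dtype="auto"` path these fixes touch:

```python
from transformers import AutoModelForCausalLM

# "auto" asks from_pretrained to pick the dtype from the checkpoint
# instead of defaulting to float32
model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype="auto")
```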
-
Nicolas Patry authored
When preparing tensors for CPU postprocessing, we need to upcast `float16` to `float32`, since CPUs don't have instructions for `[b]float16`.
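A minimal sketch of the idea, with an illustrative helper name:

```python
import torch

def to_cpu_for_postprocessing(t: torch.Tensor) -> torch.Tensor:
    # CPUs lack (b)float16 kernels for many ops, so upcast half-precision
    # tensors to float32 before moving them off the accelerator
    if t.dtype in (torch.float16, torch.bfloat16):
        t = t.float()
    return t.cpu()
```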
-
Stas Bekman authored
-
Yih-Dar authored
* Fix very long job failure text in Slack report Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Nicolas Patry authored
* Adding `top_k` and `sort` arguments to the `text-classification` pipeline. - Deprecate `return_all_scores`, as `top_k` is more uniform with other pipelines and a superset of what `return_all_scores` can do; BC is maintained: `return_all_scores=True` -> `top_k=None`, `return_all_scores=False` -> `top_k=1`. - Using `top_k` implies sorting the results, but passing no argument keeps the results unsorted for backward compatibility. * Remove `sort`. * Fixing the test. * Remove bad doc.
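A short usage sketch of the new argument after this change; the model is left to the pipeline default:

```python
from transformers import pipeline

clf = pipeline("text-classification")

clf("This movie was great!", top_k=1)     # was return_all_scores=False
clf("This movie was great!", top_k=None)  # was return_all_scores=True; sorted
```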
-
Sylvain Gugger authored
-
amyeroberts authored
* Use shape_list to safely get shapes * Add relevant test * Tidy and add metrics * Resolve dynamic shaping issues and move test * Tidy up and all samples in batch * Formatting
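A condensed sketch of the idea behind the `shape_list` utility referenced above: keep static dims where TF knows them and fall back to dynamic entries where the graph leaves `None`:

```python
import tensorflow as tf

def shape_list(tensor: tf.Tensor) -> list:
    # static dims are ints where known, None where dynamic;
    # tf.shape supplies the dynamic values at runtime
    static = tensor.shape.as_list()
    dynamic = tf.shape(tensor)
    return [dynamic[i] if dim is None else dim for i, dim in enumerate(static)]
```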
-
regisss authored
-
regisss authored
* Add ONNX support for ResNet * Add ONNX test * make fix-copies
-
Younes Belkada authored
* adding template * update model * model update * update conf for debug model * update conversion * update conversion script * update conversion script * fix missing keys check * add tests to test the tokenizer in the local machine * Change variable name * add tests on xnli dataset * add more description * add descriptions + clearer code * clearer code * adding new tests + skipping few tests because of env problems * change comment * add dtype on the configuration * add test embeddings * add hardcoded test * fix dtype issue * adding torch.float16 to config * adding more metrics (min, max, mean) * add sum * now the test passes with almost equal * add files for conversion - test passes on cpu gpu * add final changes * cleaning code * add new args in the docstring * fix one liner function * remove macros * remove forward attention * clean up init function * add comments on the issue * rm scale mask softmax * do make style * fix dtype in init * fixing for loop on att probs * fix style with black * fix style + doc error * fix and debug CI errors (docs + style) * some updates - change new operations - finally add scaled softmax - added new args in the config * make use cache working * add changes - save sharded models - final changes on the modeling script * add changes - comment on alibi - add TODO on seq length * test commit - added a text to test the commit Co-authored-by: thomasw21 <24695242+thomasw21@users.noreply.github.com> * final changes - attention mask change - generation works on BS176b Co-authored-by: thomasw21 <24695242+thomasw21@users.noreply.github.com> * changes - model + conversion * move to correct dir * put , * few fixes * fix tokenizer autodoc * fix minor CI issues * fix minor CI issues * fix minor CI issues * fix style issue * fix minor import issues * fix few issues * remove def main on the test * add require torch * replace decorator with 'with' * fix style * change to bloom * add quick fix tokenizer * fix tokenizer file * fix tokenizer - merge tests - small fixes * fix import issue * add bloom to readme * fix consistency * Update docs/source/en/model_doc/bloom.mdx Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Apply suggestions from code review: fix comment issues on file headers Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * fix doc issue * small fix - modeling test * some changes - refactor some code - taking into account reviews - more tests should pass - removed pruning tests * remove useless division * more tests should pass * more tests should pass * more tests should pass * let's try this one - add alibi offset - remove all permutes to make the grad operations work - fingers crossed * refactor - refactor code - style changes - add new threshold for test * major changes - change BLOOM to Bloom - add quick doc on bloom.mdx - move embeddings test on modeling test * modify readme * small fixes * small fix - better threshold for a test * remove old test file from fetcher * fix small typo * major change - change BloomLMHead to BloomForCausalLM * remove onnx config * major changes - refactor the code - remove asserts - change tol for test * make style * small change * adding a slow test + commenting old ones for now * make style * Apply suggestions from code review Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * make style * fix duplicates * cleaning comments on config * clean a bit conversion file * refactor a bit modeling file * refactor tokenizer file * fix tokenization test issue * fix tokenization issue #2 * fix tokenization issue second try * fix test issue * make style + add suggestions * change test fetcher * try this one - slow tests should pass - fingers crossed * possible final changes * make style * try fix padding side issue * fix side * fix padding issue * fix ko-readme * fix config auto * cleaning modeling file * keep bloom in caps in ko * update config docs * remove pretraining_pp * remove model parallel * update config - add correct config files * fix duplicates * fix fetcher * fix refactor issue - remove divide function * try to remove alibi * small fixes - fix alibi - remove seq length - refactor a bit the code * put correct values - fix bos and eos token ids * fix attention mask loop Co-authored-by: thomasw21 <24695242+thomasw21@users.noreply.github.com> * small fixes: - remove skip bias add * small fixes - fix typo in readme - fix typos in config * small changes - remove a test - add reconstruction test - change config * small changes - change Scaled Softmax to BloomScaledSoftmax * small fixes - fix alibi dtype * major changes - removing explicit dtype when loading modules - fixing test args (torch_dtype=auto) - add docstring * fix readmes * major changes - now bloom supports alibi shifting - refactor a bit the code - better test tolerance now * refactor a bit * refactor a bit * put correct name on test * change docstring * small changes - fix docstring modeling - fix test tolerance * fix small nit - take dtype from tensors in the conversion script * minor fix - fix mdx issue * minor fix - change config docstring * forward contrib credits from PR14084 * Apply suggestions from code review Co-authored-by: Stas Bekman <stas00@users.noreply.github.com> * apply modifications Co-authored-by: Stas Bekman <stas00@users.noreply.github.com> * resolve softmax upcast * Apply suggestions from code review Co-authored-by: Stas Bekman <stas00@users.noreply.github.com> * Update src/transformers/models/bloom/modeling_bloom.py Co-authored-by: Niklas Muennighoff <n.muennighoff@gmail.com> * final changes modeling Co-authored-by: Stas Bekman <stas00@users.noreply.github.com> * Merge commit 'd156898f' * merge commit * Apply suggestions from code review Co-authored-by: Stas Bekman <stas00@users.noreply.github.com> * apply suggestions from Stas comments Co-authored-by: Stas Bekman <stas00@users.noreply.github.com> * Fix gradient checkpointing Co-authored-by: Stas Bekman <stas00@users.noreply.github.com> * add slow but exact * add accelerate compatibility Co-authored-by: Nicolas Patry <Narsil@users.noreply.github.com> * forward contrib credits Co-authored-by: thomasw21 <thomasw21@users.noreply.github.com> Co-authored-by: sgugger <sgugger@users.noreply.github.com> Co-authored-by: patrickvonplaten <patrickvonplaten@users.noreply.github.com> Co-authored-by: Niklas Muennighoff <n.muennighoff@gmail.com> Co-authored-by: LysandreJik <LysandreJik@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * fix torch device on tests * make style * Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * fix nits Co-authored-by: patrickvonplaten <patrickvonplaten@users.noreply.github.com> * remove final nits * fix doc - add more details on the doc - add links to checkpoints * Update src/transformers/__init__.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/bloom/modeling_bloom.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * apply suggestions Co-authored-by: sgugger <sgugger@users.noreply.github.com> * put test torchscript to false * Update src/transformers/models/bloom/modeling_bloom.py Co-authored-by: justheuristic <justheuristic@gmail.com> * fix alibi - create alibi only once * add small doc * make quality * replace torch.nn * remove token type emb * fix fused op + output bias * add fused op - now can control fused operation from config * remove fused op * make quality * small changes - remove unused args on config - removed bias gelu file - make the model torchscriptable - add torchscript slow tests * Update src/transformers/models/bloom/modeling_bloom.py * fix slow * make style * add accelerate support * add bloom to deepspeed tests * minor changes * Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * minor change * slow tests pass * Apply suggestions from code review Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update docs/source/en/model_doc/bloom.mdx Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * minor changes: - change docstring - add link to paper

Co-authored-by: Thomwolf <thomwolf@gmail.com>
Co-authored-by: Thomas Wolf <thomas@huggingface.co>
Co-authored-by: thomasw21 <24695242+thomasw21@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: sIncerass <sheng.s@berkeley.edu>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Niklas Muennighoff <n.muennighoff@gmail.com>
Co-authored-by: Nicolas Patry <Narsil@users.noreply.github.com>
Co-authored-by: thomasw21 <thomasw21@users.noreply.github.com>
Co-authored-by: sgugger <sgugger@users.noreply.github.com>
Co-authored-by: patrickvonplaten <patrickvonplaten@users.noreply.github.com>
Co-authored-by: LysandreJik <LysandreJik@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: justheuristic <justheuristic@gmail.com>
Co-authored-by: Stas Bekman <stas@stason.org>
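Given the size of this entry (the initial BLOOM port), a minimal usage sketch may help; the checkpoint name is an assumption, as the bigscience checkpoints were published under several sizes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "bigscience/bloom-560m" is an illustrative checkpoint name
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", torch_dtype="auto")

inputs = tokenizer("BLOOM uses ALiBi position biases, which", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```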
-