- 28 Jun, 2023 2 commits
-
-
Yih-Dar authored
* fix
* fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Yih-Dar authored
* fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
- 23 Jun, 2023 1 commit
-
-
Matt authored
* An end to accursed version-specific imports
* No more K.is_keras_tensor() either
* Update dependency tables
* Use a cleaner call context function getter
* Add a cap to <2.14
* Add cap to examples requirements too
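
The "cap to <2.14" bullets refer to an upper bound on the TensorFlow requirement. As a hedged illustration (the `_deps` name and the lower bounds below are assumptions, not the repository's exact pins), such a cap is expressed in a setuptools-style dependency list like this:

```python
# Illustrative sketch of an upper version cap in a setuptools dependency list.
# The lower bounds here are assumptions, not the repository's exact pins.
_deps = [
    "tensorflow>2.9,<2.14",      # cap so an untested 2.14 release cannot break CI
    "tensorflow-cpu>2.9,<2.14",  # keep the CPU-only build on the same range
]
```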
-
- 14 Jun, 2023 1 commit
-
-
Sylvain Gugger authored
* Clean up old Accelerate checks
* Put back imports
-
- 08 Jun, 2023 1 commit
-
-
Sylvain Gugger authored
-
- 07 Jun, 2023 2 commits
-
-
Sylvain Gugger authored
-
Zachary Mueller authored
* Min accelerate
* Also min version
* Min accelerate
* Also min version
* To different minor version
* Empty
-
- 01 Jun, 2023 1 commit
-
-
Sylvain Gugger authored
-
- 31 May, 2023 2 commits
-
-
Zachary Mueller authored
* Upgrade safetensors
* Second table
-
Sanchit Gandhi authored
* fix for ragged list
* unpin numba
* make style
* np.object -> object
* propagate changes to tokenizer as well
* np.long -> "long"
* revert tokenization changes
* check with tokenization changes
* list/tuple logic
* catch numpy
* catch else case
* clean up
* up
* better check
* trigger ci
* Empty commit to trigger CI
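
The "np.object -> object" and "np.long" bullets refer to aliases that were removed in NumPy 1.24. A minimal sketch of the substitution (the array contents are illustrative):

```python
import numpy as np

# The deprecated aliases np.object and np.long were removed in NumPy 1.24.
ragged = np.empty(2, dtype=object)           # previously: dtype=np.object
ragged[0], ragged[1] = [1, 2, 3], [4, 5]     # ragged lists need the object dtype
ids = np.asarray([1, 2, 3], dtype=np.int64)  # previously: dtype=np.long
```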
-
- 23 May, 2023 1 commit
-
-
Nicolas Patry authored
* Making `safetensors` a core dependency. To be merged later, I'm creating the PR so we can try it out.
* Update setup.py
* Remove duplicates.
* Even more redundant.
-
- 16 May, 2023 1 commit
-
-
Sylvain Gugger authored
* Add a test of the built release
* Polish everything
* Trigger CI
-
- 12 May, 2023 1 commit
-
-
Yih-Dar authored
* min. version for pytest
* fix
* fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
- 11 May, 2023 2 commits
-
-
Sylvain Gugger authored
-
Lysandre Debut authored
* Agents extras
* Add to docs
-
- 10 May, 2023 1 commit
-
-
José Ángel Rey Liñares authored
* chore: allow protobuf 3.20.3. Allow latest bugfix release for protobuf (3.20.3).
* chore: update auto-generated dependency table
* run in subprocess
* Apply suggestions from code review
* Apply suggestions

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
-
- 09 May, 2023 1 commit
-
-
Sylvain Gugger authored
-
- 08 May, 2023 1 commit
-
-
Sylvain Gugger authored
-
- 04 May, 2023 1 commit
-
-
Sylvain Gugger authored
-
- 03 May, 2023 1 commit
-
-
Sylvain Gugger authored
-
- 20 Apr, 2023 1 commit
-
-
amyeroberts authored
* Pin optax version
* Pin flax too
* Fixup
-
- 18 Apr, 2023 1 commit
-
-
Zachary Mueller authored
-
- 17 Apr, 2023 1 commit
-
-
Zachary Mueller authored
* Use accelerate for device management
* Add accelerate to setup

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
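
A minimal sketch of what "use accelerate for device management" can look like in practice, assuming Accelerate's PartialState as the entry point (the print is illustrative):

```python
from accelerate import PartialState

# PartialState resolves the correct device for the current process:
# CPU, a single GPU, or one rank of a distributed launch.
device = PartialState().device
print(device)  # e.g. "cuda:0" when a GPU is visible, otherwise "cpu"
```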
-
- 13 Apr, 2023 1 commit
-
-
Sylvain Gugger authored
-
- 07 Apr, 2023 1 commit
-
-
Sylvain Gugger authored
-
- 06 Apr, 2023 1 commit
-
-
Nicolas Patry authored
* Adding Llama FastTokenizer support.
  - Requires https://github.com/huggingface/tokenizers/pull/1183 version
  - Only support byte_fallback for llama, raise otherwise (safety net).
  - Lots of questions are special tokens

  How to test:

  ```python
  from transformers.convert_slow_tokenizer import convert_slow_tokenizer
  from transformers import AutoTokenizer
  from tokenizers import Tokenizer

  tokenizer = AutoTokenizer.from_pretrained("huggingface/llama-7b")

  if False:
      new_tokenizer = Tokenizer.from_file("tok.json")
  else:
      new_tokenizer = convert_slow_tokenizer(tokenizer)
      new_tokenizer.save("tok.json")

  strings = [
      "This is a test",
      "生活的真谛是",
      "生活的真谛是[MASK]。",
      # XXX: This one is problematic because of special tokens
      # "<s> Something something",
  ]

  for string in strings:
      encoded = tokenizer(string)["input_ids"]
      encoded2 = new_tokenizer.encode(string).ids
      assert encoded == encoded2, f"{encoded} != {encoded2}"
      decoded = tokenizer.decode(encoded)
      decoded2 = new_tokenizer.decode(encoded2)
      assert decoded.strip() == decoded2, f"{repr(decoded)} != {repr(decoded2)}"
  ```

  The converter + some test script. The test script. Tmp save. Adding Fast tokenizer + tests. Adding the tokenization tests. Correct combination. Small fix. Fixing tests. Fixing with latest update. Rebased. fix copies + normalized added tokens + copies. Adding doc. TMP. Doc + split files. Doc. Versions + try import. Fix Camembert + warnings -> Error. Fix by ArthurZucker. Not a decorator.
* Fixing comments.
* Adding more to docstring.
* Doc rewriting.
-
- 03 Apr, 2023 2 commits
-
-
Xuehai Pan authored
* [setup] migrate setup script to `pyproject.toml`
* [setup] cleanup configurations
* remove unused imports
-
Xuehai Pan authored
* [setup] drop deprecated `distutils` usage
* drop deprecated `distutils.util.strtobool` usage
* fix import order
* reformat docstring by `doc-builder`
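
Since `distutils` is deprecated (and removed in Python 3.12), dropping `distutils.util.strtobool` usually means carrying a small local replacement. A hedged sketch of such a helper (the exact name and location in the codebase are assumptions):

```python
def strtobool(value: str) -> bool:
    """Local replacement for the deprecated distutils.util.strtobool."""
    value = value.lower()
    if value in ("y", "yes", "t", "true", "on", "1"):
        return True
    if value in ("n", "no", "f", "false", "off", "0"):
        return False
    raise ValueError(f"Invalid truth value {value!r}")
```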
-
- 29 Mar, 2023 2 commits
-
-
Sylvain Gugger authored
-
Sylvain Gugger authored
-
- 24 Mar, 2023 2 commits
-
-
Joao Gante authored
-
Sylvain Gugger authored
* Pin tensorflow-text to go with tensorflow
* Make it more convenient to pin TensorFlow
* setup don't like f-strings
-
- 22 Mar, 2023 1 commit
-
-
Stas Bekman authored
* [deepspeed] offload + non-cpuadam optimizer exception doc
* deps
-
- 21 Mar, 2023 2 commits
-
-
Ali Hassani authored
-
Yih-Dar authored
* time to say goodbye, torch 1.7 and 1.8
* clean up torch_int_div
* clean up is_torch_less_than_1_8-9
* update

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
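
With torch 1.7 and 1.8 dropped, a compatibility shim like torch_int_div can give way to torch.div with an explicit rounding mode, which all remaining supported versions provide. A minimal sketch (the tensor values are illustrative):

```python
import torch

a = torch.tensor([7, 8, 9])
b = torch.tensor([2, 3, 4])
# Integer floor division without the old torch_int_div shim.
quotient = torch.div(a, b, rounding_mode="floor")  # tensor([3, 2, 2])
```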
-
- 17 Mar, 2023 1 commit
-
-
Ali Hassani authored
* Add kernel size to NATTEN's QK arguments. The new NATTEN 0.14.5 supports PyTorch 2.0, but also adds an additional argument to the QK operation to allow optional RPBs. This ends up failing NATTEN tests. This commit adds NATTEN back to circleci and adds the arguments to get it working again.
* Force NATTEN >= 0.14.5
-
- 14 Mar, 2023 1 commit
-
-
Sylvain Gugger authored
-
- 02 Mar, 2023 1 commit
-
-
amyeroberts authored
* Use PyAV instead of Decord
* Get frame indices
* Fix number of frames
* Update src/transformers/models/videomae/image_processing_videomae.py
* Fix up
* Fix copies
* Update timesformer doctests
* Update docstrings
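
A minimal sketch of decoding frames with PyAV rather than Decord, as in the first bullet above; the file name and the every-8th-frame selection are illustrative assumptions:

```python
import av
import numpy as np

container = av.open("video.mp4")
frames = []
for i, frame in enumerate(container.decode(video=0)):
    if i % 8 == 0:  # keep every 8th frame as a stand-in for real frame sampling
        frames.append(frame.to_ndarray(format="rgb24"))
video = np.stack(frames)  # shape: (num_frames, height, width, 3)
```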
-
- 16 Feb, 2023 1 commit
-
-
Sylvain Gugger authored
-
- 13 Feb, 2023 1 commit
-
-
Stas Bekman authored
* Update setup.py
* suggestions
-