- 27 Feb, 2024 (2 commits)

  - Hailey Schoelkopf authored
    Co-authored-by: Daniel Furman <dryanfurman@gmail.com>

  - Hailey Schoelkopf authored
    Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>

- 16 Jan, 2024 (1 commit)

  - haileyschoelkopf authored

- 15 Jan, 2024 (2 commits)

  - haileyschoelkopf authored

  - Hailey Schoelkopf authored
    * add WIP device_map overrides
    * update handling outside of accelerate launcher
    * change .to(device) log to debug level
    * run linter

- 13 Jan, 2024 (3 commits)

  - daniel-furman authored
  - daniel-furman authored
  - daniel-furman authored

- 11 Jan, 2024 (7 commits)

  - Hailey Schoelkopf authored
    * fix incorrect lookback protections
    * bump generate_until task versions

  - daniel-furman authored
  - daniel-furman authored
  - daniel-furman authored
  - daniel-furman authored
  - daniel-furman authored
  - daniel-furman authored

- 10 Jan, 2024 (7 commits)

  - daniel-furman authored
  - daniel-furman authored
  - daniel-furman authored
  - daniel-furman authored
  - daniel-furman authored
  - daniel-furman authored
  - daniel-furman authored

- 07 Jan, 2024 (6 commits)

  - daniel-furman authored
  - daniel-furman authored
  - daniel-furman authored
  - daniel-furman authored
  - daniel-furman authored
  - daniel-furman authored

- 04 Jan, 2024 (1 commit)

  - Baber Abbasi authored
    * copies max_length from huggingface
    * handle max_length properly
    * get tokens from inputs
    * substitute Collator for Reorderer
    * `batch=auto` if using data_parallel
    * nit
    * cleanup
    * update code comments
    * `ray.shutdown()` after calling method if data_parallel_size > 1

    Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>

- 02 Jan, 2024 (2 commits)

  - Stella Biderman authored

  - Baber Abbasi authored
    * auto-batch requires len of iter
    * handle case when batch_size="auto:N"

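The `batch_size="auto:N"` commit above refers to accepting a batch-size setting that is a plain integer, `"auto"`, or `"auto:N"` (re-run automatic batch-size detection N times). As a purely illustrative sketch — the function name and return convention are assumptions, not the harness's actual code — such a value could be parsed like this:

```python
def parse_batch_size(value):
    """Hypothetical parser for a batch-size setting.

    Returns (batch_size, num_auto_recomputes), where batch_size is an
    int or the string "auto". The real harness's parsing may differ.
    """
    if isinstance(value, int):
        return value, 0
    if value == "auto":
        return "auto", 1
    if value.startswith("auto:"):
        # "auto:N" -> re-run auto-detection N times during the run
        return "auto", int(value.split(":", 1)[1])
    return int(value), 0
```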
- 27 Dec, 2023 (2 commits)

  - Baber Abbasi authored
    * fix group
    * siqa: default.yml -> default.yaml
    * max_gen_toks -> self.max_gen_toks
    * add ids to task tests
    * fix siqa
    * fix gen_kwargs for openai-chat

  - Jaewoo Yang authored

- 23 Dec, 2023 (1 commit)

  - Baber Abbasi authored
    * refactor dataloader
    * cleanup + add docs
    * change arg
    * renamed Collator and added testing
    * parametrized test for Collator
    * appease pre-commit
    * added edge case batch 0 (no batching)
    * fix typos

    Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>

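The Collator refactor above follows a common batching pattern: sort requests by a key (e.g. length) so similar-sized items batch together, then restore the original order of results afterward. A minimal hypothetical sketch of that pattern — the class name matches the commit messages, but the interface here is an assumption, not the harness's actual API:

```python
class Collator:
    """Hypothetical length-sorted batching helper (the real
    lm-evaluation-harness Collator differs): sort items by a key for
    efficient batching, then map results back to their original order."""

    def __init__(self, items, sort_key):
        # Remember each item's original index before sorting.
        self._indexed = sorted(enumerate(items), key=lambda p: sort_key(p[1]))

    def batches(self, n):
        # Yield batches of up to n sorted items; n == 0 means no batching
        # (one batch containing everything), matching the edge case noted
        # in the commit messages.
        step = n if n > 0 else len(self._indexed)
        for i in range(0, len(self._indexed), step):
            yield [item for _, item in self._indexed[i:i + step]]

    def restore_order(self, results):
        # results arrive in sorted order; scatter them back.
        out = [None] * len(results)
        for (orig_idx, _), res in zip(self._indexed, results):
            out[orig_idx] = res
        return out
```

Usage: consume `batches(n)`, run the model on each batch, flatten the outputs, then call `restore_order` so callers see results in the order they submitted requests.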
- 22 Dec, 2023 (3 commits)

  - Hailey Schoelkopf authored
    * modularize HFLM code
    * pass through extra kwargs to AutoModel.from_pretrained call
    * remove explicit model_kwargs
    * rename gptq -> autogptq
    * fix tokenizer pad token errors
    * ensure model always respects device_map and autogptq's selected devices
    * add a _get_config helper fn
    * add mambaLMWrapper
    * add mamba extra
    * fix conditional import
    * Fix botched merge commit
    * Remove beginning-of-file comment for consistency
    * Add docstring for mambaLM re: supported kwargs
    * Alphabetize extras
    * Update extras table
    * appease precommit
    * run precommit on mamba_lm

  - Zach Schillaci authored
    * Add retry error handler
    * fixup! Add retry error handler
    * Move to utils.py
    * Run isort on utils.py
    * Catch multiple exceptions
    * Update LMs with exception handler
    * Fixes to anthropic retry handler
    * fix callback kwarg
    * Update textsynth.py
    * fix python 3.8 incompatibility
    * fix indenterror I introduced
    * placate linter?
    * Update on_exception_callback kwarg name
    * fixup! Merge branch 'main' into add-retry-error-handler
    * fixup! fixup! Merge branch 'main' into add-retry-error-handler
    * Merge conflicts are fun
    * Run pre-commit

    Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>

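The retry error handler described above — catching multiple exception types, retrying API calls, and invoking an `on_exception_callback` — is a standard decorator pattern. A hypothetical sketch under those assumptions (the decorator name, parameters, and backoff policy here are illustrative, not the code that landed in utils.py):

```python
import time


def retry_on_exception(exceptions, max_retries=3, backoff=1.0,
                       on_exception_callback=None):
    """Hypothetical retry decorator: retry the wrapped call when one of
    the given exception types is raised, sleeping `backoff` seconds
    between attempts and invoking the optional callback on each failure.
    The harness's actual implementation may differ."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except exceptions as exc:
                    if on_exception_callback is not None:
                        on_exception_callback(exc, attempt)
                    if attempt == max_retries - 1:
                        # Out of retries: surface the last exception.
                        raise
                    time.sleep(backoff)
        return wrapper
    return decorator
```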
- 21 Dec, 2023 (2 commits)

  - Anjor Kanekar authored
    * remove tokenizer for openai chat completions
    * reordering function
    * linter
    * remove tiktoken import

  - Anjor Kanekar authored
    * separate local flag
    * tokenizer_backend
    * import order

- 20 Dec, 2023 (1 commit)

  - Vicki Boykis authored
    * LocalChatCompletionsLM add
    * clean up completions class
    * update tokens
    * README
    * fix constructor
    * eos token
    * folding local-chat-completions into OpenAIChatCompletions
    * refactoring to include gen_kwargs as passable option
    * add todo on chat completion kwarg validation
    * Ruff and README fix
    * generalize to **kwargs
    * remove unnecessary kwargs
    * README and remove kwargs
    * README