- 02 Jan, 2024 1 commit
Baber Abbasi authored
* auto-batch requires len of iter
* handle case when batch_size="auto:N"
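The `batch_size="auto:N"` convention mentioned in the commit above can be handled with a small parsing helper. The following is a hypothetical sketch (the function name and return shape are illustrative, not the harness's actual API), assuming `auto:N` means "use automatic batch-size detection, re-estimated N times":

```python
def parse_batch_size(batch_size):
    """Parse a batch_size that may be an int, a numeric string,
    "auto", or "auto:N". Returns (size, n_reestimations)."""
    if isinstance(batch_size, int):
        return batch_size, 0
    if batch_size == "auto":
        return "auto", 1
    if batch_size.startswith("auto:"):
        # "auto:N": re-run automatic batch-size detection N times
        return "auto", int(batch_size.split(":", 1)[1])
    # plain numeric string, e.g. "8"
    return int(batch_size), 0
```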
- 27 Dec, 2023 2 commits
Baber Abbasi authored
* fix group
* siqa: default.yml -> default.yaml
* max_gen_toks -> self.max_gen_toks
* add ids to task tests
* fix siqa
* fix gen_kwargs for openai-chat

Jaewoo Yang authored
- 23 Dec, 2023 1 commit
Baber Abbasi authored
* refactor dataloader
* cleanup + add docs
* change arg
* renamed Collator and added testing
* parametrized test for Collator
* appease pre-commit
* added edge case batch 0 (no batching)
* fix typos
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
- 22 Dec, 2023 3 commits
Hailey Schoelkopf authored
* modularize HFLM code
* pass through extra kwargs to AutoModel.from_pretrained call
* remove explicit model_kwargs
* rename gptq -> autogptq
* fix tokenizer pad token errors
* ensure model always respects device_map and autogptq's selected devices
* add a _get_config helper fn
* add mambaLMWrapper
* add mamba extra
* add mamba extra
* fix conditional import
* Fix botched merge commit
* Remove beginning-of-file comment for consistency
* Add docstring for mambaLM re: supported kwargs
* Alphabetize extras
* Update extras table
* appease precommit
* run precommit on mamba_lm

Zach Schillaci authored
* Add retry error handler
* fixup! Add retry error handler
* Move to utils.py
* Run isort on utils.py
* Catch multiple exceptions
* Update LMs with exception handler
* Fixes to anthropic retry handler
* fix callback kwarg
* Update textsynth.py
* fix python 3.8 incompatibility
* fix indent error I introduced
* placate linter?
* Update on_exception_callback kwarg name
* fixup! Merge branch 'main' into add-retry-error-handler
* fixup! fixup! Merge branch 'main' into add-retry-error-handler
* Merge conflicts are fun
* Run pre-commit
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
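The retry error handler described in the commit above can be approximated as a decorator. This is a minimal sketch assuming exponential backoff and an optional per-exception callback; the parameter names echo the commit messages (`on_exception_callback`), but the body is illustrative, not the actual utils.py implementation:

```python
import functools
import time


def retry_on_specific_exceptions(on_exceptions, max_retries=3,
                                 backoff_time=1.0,
                                 on_exception_callback=None):
    """Retry the wrapped call when one of `on_exceptions` is raised,
    doubling the sleep between attempts; re-raise on the final attempt."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            delay = backoff_time
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except tuple(on_exceptions) as exc:
                    if on_exception_callback is not None:
                        on_exception_callback(exc, delay)
                    if attempt == max_retries - 1:
                        raise  # out of retries: surface the error
                    time.sleep(delay)
                    delay *= 2
        return wrapper
    return decorator
```

Catching multiple exception types (another bullet above) falls out naturally, since `on_exceptions` is converted to a tuple for the `except` clause.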
- 21 Dec, 2023 2 commits
Anjor Kanekar authored
* remove tokenizer for openai chat completions
* reordering function
* linter
* remove tiktoken import

Anjor Kanekar authored
* separate local flag
* tokenizer_backend
* import order
- 20 Dec, 2023 2 commits
Vicki Boykis authored
* LocalChatCompletionsLM add
* clean up completions class
* clean up completions class
* update tokens
* README
* fix constructor
* eos token
* folding local-chat-completions into OpenAIChatCompletions
* refactoring to include gen_kwargs as passable option
* add todo on chat completion kwarg validation
* Ruff and README fix
* generalize to **kwargs
* remove unnecessary kwargs
* README and remove kwargs
* README

Baber Abbasi authored
* add ruff and isort. remove black and flake8
* remove unnecessary dependencies
* remove dependency from table
* change order
* ran ruff
* check 3.9
* exclude evaluator
* update CI workflow
* use ruff config in pyproject.toml
* test
* add isort rules to ruff
* sort imports
* import `make_table`
* try stages for no-commit-to-branch
* turn on mypy for pre-commit
* test
* test
* test
* change no-commit-to-branch to default
* nits
* fixed dependency
- 19 Dec, 2023 2 commits
Pasquale Minervini authored
* self.device in huggingface.py line 210: in huggingface.py line 210, self.device is a str and does not have a "type" attribute
* Update huggingface.py: handles both the case where `self.device` is a `torch.device` and a string
* Update huggingface.py
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>

Hailey Schoelkopf authored
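The device fix above boils down to reading the device type whether the value is a `torch.device` (which exposes a `.type` attribute) or a plain string like `"cuda:0"`. A minimal, torch-free sketch of that check (the helper name is hypothetical):

```python
def device_type(device):
    """Return the device type ("cpu", "cuda", ...) for either a
    torch.device-like object (exposes .type) or a plain string."""
    if hasattr(device, "type") and not isinstance(device, str):
        return device.type
    # plain string form, e.g. "cuda:0" -> "cuda"
    return str(device).split(":", 1)[0]
```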
- 16 Dec, 2023 2 commits
Baber Abbasi authored
* fixed syntactic nits
* fix temperature and seed
* fix logprobs
* fixup merge

Baber Abbasi authored
- 15 Dec, 2023 2 commits
Vicki Boykis authored
* enabling OpenAI completions via gooseai
* openai-completions and pin openai

Baber Abbasi authored
- 14 Dec, 2023 3 commits
NanoCode012 authored
* fix: passing max_length to vllm engine args
* feat: add `max_model_len`
* chore: lint

Yuliang Li authored

Hailey Schoelkopf authored
* modularize HFLM code
* pass through extra kwargs to AutoModel.from_pretrained call
* remove explicit model_kwargs
* rename gptq -> autogptq
* fix tokenizer pad token errors
* ensure model always respects device_map and autogptq's selected devices
* add a _get_config helper fn
- 12 Dec, 2023 2 commits
Hailey Schoelkopf authored

Hailey Schoelkopf authored
- 10 Dec, 2023 1 commit
baberabb authored
- 04 Dec, 2023 1 commit
baberabb authored
- 03 Dec, 2023 3 commits
- 29 Nov, 2023 9 commits
- 28 Nov, 2023 2 commits
baberabb authored

lintangsutawika authored
- 27 Nov, 2023 2 commits
baberabb authored

Hailey Schoelkopf authored