- 31 Jan, 2024 1 commit
Hailey Schoelkopf authored
* don't override do_sample if no value for it is passed
* Update gen_kwargs override condition
* Update huggingface.py
* run linters
* silence an erroneous warning
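The `do_sample` fix above amounts to only overriding generation defaults when the caller actually supplied a value. A minimal sketch of that pattern (the helper name and defaults here are illustrative, not the harness's actual code):

```python
# Illustrative defaults; the real harness derives these from model/gen config.
GREEDY_DEFAULTS = {"do_sample": False, "temperature": 0.0}

def build_gen_kwargs(user_kwargs):
    """Merge user-supplied generation kwargs over the defaults without
    clobbering `do_sample` (or any other key) the caller never passed."""
    merged = dict(GREEDY_DEFAULTS)
    # Only keys the caller explicitly provided override the defaults.
    merged.update({k: v for k, v in user_kwargs.items() if v is not None})
    return merged
```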
- 28 Jan, 2024 1 commit
LSinev authored
* Raise an Exception instance, not a string. See:
  https://peps.python.org/pep-0352/#exception-hierarchy-changes
  https://docs.python.org/3.8/tutorial/errors.html#raising-exceptions
* Apply the PEP 8 recommendation to prefer isinstance: "Object type comparisons should always use isinstance() instead of comparing types directly" (https://peps.python.org/pep-0008/)
* Remove dangerous default mutable values in arguments (https://pylint.readthedocs.io/en/stable/user_guide/messages/warning/dangerous-default-value.html)
* Format logging messages with f-strings, not with .format(). See https://pylint.readthedocs.io/en/stable/user_guide/messages/warning/logging-format-interpolation.html. There are also discussions about the performance of formatting while logging and about unintended code execution (https://github.com/pylint-dev/pylint/issues/2395, https://stackoverflow.com/a/54368109), but at least one style (the f-string one) will be used throughout the project.
* Specify utf-8 encoding for `open` explicitly. If not specified, it may be assumed differently across environments, OSes, and Python versions. See:
  https://peps.python.org/pep-0597/
  https://docs.python.org/3.11/library/locale.html#locale.getencoding
  https://docs.python.org/3.10/library/os.html#utf8-mode
  https://pylint.readthedocs.io/en/stable/user_guide/messages/warning/unspecified-encoding.html
  This also helps when code from English-language tasks is taken as inspiration for tasks in non-English languages.
* Use inline ignore comments to pass pre-commit instead of an identity workaround:
  https://flake8.pycqa.org/en/3.0.1/user/ignoring-errors.html#in-line-ignoring-errors
  https://www.flake8rules.com/rules/F841.html
  flake8 comments are supported by ruff: https://docs.astral.sh/ruff/linter/#error-suppression
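The patterns in that commit are standard Python lint fixes; a small sketch pulling them together (the file-loading function and names are hypothetical, for illustration only):

```python
import logging

logger = logging.getLogger(__name__)

def load_labels(path, extra=None):
    # No mutable default argument: use None and build the list inside.
    extra = list(extra) if extra is not None else []
    # Explicit encoding, so behaviour does not depend on the platform
    # locale (PEP 597).
    with open(path, encoding="utf-8") as f:
        labels = [line.strip() for line in f if line.strip()] + extra
    if not labels:
        # Raise an exception instance, never a bare string (PEP 352).
        raise ValueError(f"no labels found in {path}")
    # f-string log formatting, as adopted project-wide in the commit above.
    logger.info(f"loaded {len(labels)} labels from {path}")
    return labels

def is_text(value):
    # PEP 8: prefer isinstance() over comparing types directly.
    return isinstance(value, str)

unused = 42  # noqa: F841  -- inline ignore instead of an identity workaround
```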
- 26 Jan, 2024 1 commit
NoushNabi authored
* added intel optimum
* added intel optimum in readme
* modified intel optimum
* modified install optimum
* modified path of IR file
* added openvino_device
* added openvino_device2
* changed optimum-causal to openvino-causal
* Update README.md
* remove `lm_eval.base` import
* update openvino-causal -> openvino ; pass device through super().__init__()
* Update README.md
* Add optimum to tests dependencies
* apply pre-commit
* fix so tests pass
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
Co-authored-by: haileyschoelkopf <hailey@eleuther.ai>
- 24 Jan, 2024 1 commit
Baber Abbasi authored
- 23 Jan, 2024 1 commit
Baber Abbasi authored
* manage default (greedy) gen_kwargs in vllm better
* mirror HF `do_sample`
* just need to set temp=0 for greedy
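The greedy-decoding fix above mirrors the HF-style `do_sample` flag by expressing greedy decoding as zero temperature, which is how vLLM's sampling parameters encode it. A hedged sketch of that mapping (not the harness's actual code; the function name is illustrative):

```python
def mirror_do_sample(gen_kwargs):
    """Translate an HF-style `do_sample` flag into sampling kwargs:
    do_sample=False means greedy decoding, expressed as temperature=0."""
    kwargs = dict(gen_kwargs)
    do_sample = kwargs.pop("do_sample", None)
    if do_sample is False and "temperature" not in kwargs:
        # Greedy: zero temperature collapses sampling to the argmax token.
        kwargs["temperature"] = 0.0
    return kwargs
```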
- 22 Jan, 2024 2 commits
Michael Goin authored
* Add `local-completions` support using OpenAI interface
* Refactor oa_completion
* Address tokenizer comments and change request chunks to batch size
* Add warning message for tiktoken backend
* fix formatting
* fix whitespace
* Update README.md
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
Hailey Schoelkopf authored
- 15 Jan, 2024 1 commit
Hailey Schoelkopf authored
* add WIP device_map overrides
* update handling outside of accelerate launcher
* change .to(device) log to debug level
* run linter
- 11 Jan, 2024 1 commit
Hailey Schoelkopf authored
* fix incorrect lookback protections
* bump generate_until task versions
- 04 Jan, 2024 1 commit
Baber Abbasi authored
* copies max_length from huggingface
* handle max_length properly
* get tokens from inputs
* substitute Collator for Reorderer
* `batch=auto` if using data_parallel
* nit
* cleanup
* update code comments
* `ray.shutdown()` after calling method if data_parallel_size > 1
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
- 02 Jan, 2024 2 commits
Stella Biderman authored
Baber Abbasi authored
* auto-batch requires len of iter
* handle case when batch_size="auto:N"
- 27 Dec, 2023 2 commits
Baber Abbasi authored
* fix group
* siqa: default.yml -> default.yaml
* max_gen_toks -> self.max_gen_toks
* add ids to task tests
* fix siqa
* fix gen_kwargs for openai-chat
Jaewoo Yang authored
- 23 Dec, 2023 1 commit
Baber Abbasi authored
* refactor dataloader
* cleanup + add docs
* change arg
* renamed Collator and added testing
* parametrized test for Collator
* appease pre-commit
* added edge case batch 0 (no batching)
* fix typos
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
- 22 Dec, 2023 3 commits
Hailey Schoelkopf authored
* modularize HFLM code
* pass through extra kwargs to AutoModel.from_pretrained call
* remove explicit model_kwargs
* rename gptq -> autogptq
* fix tokenizer pad token errors
* ensure model always respects device_map and autogptq's selected devices
* add a _get_config helper fn
* add mambaLMWrapper
* add mamba extra
* fix conditional import
* Fix botched merge commit
* Remove beginning-of-file comment for consistency
* Add docstring for mambaLM re: supported kwargs
* Alphabetize extras
* Update extras table
* appease pre-commit
* run pre-commit on mamba_lm
Zach Schillaci authored
* Add retry error handler
* fixup! Add retry error handler
* Move to utils.py
* Run isort on utils.py
* Catch multiple exceptions
* Update LMs with exception handler
* Fixes to anthropic retry handler
* fix callback kwarg
* Update textsynth.py
* fix python 3.8 incompatibility
* fix IndentationError I introduced
* placate linter?
* Update on_exception_callback kwarg name
* fixup! Merge branch 'main' into add-retry-error-handler
* Merge conflicts are fun
* Run pre-commit
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
- 21 Dec, 2023 2 commits
Anjor Kanekar authored
* remove tokenizer for openai chat completions
* reordering function
* linter
* remove tiktoken import
Anjor Kanekar authored
* separate local flag
* tokenizer_backend
* import order
- 20 Dec, 2023 2 commits
Vicki Boykis authored
* LocalChatCompletionsLM add
* clean up completions class
* update tokens
* README
* fix constructor
* eos token
* folding local-chat-completions into OpenAIChatCompletions
* refactoring to include gen_kwargs as passable option
* add todo on chat completion kwarg validation
* Ruff and README fix
* generalize to **kwargs
* remove unnecessary kwargs
* README and remove kwargs
* README
Baber Abbasi authored
* add ruff and isort; remove black and flake8
* remove unnecessary dependencies
* remove dependency from table
* change order
* ran ruff
* check 3.9
* exclude evaluator
* update CI workflow
* use ruff config in pyproject.toml
* test
* add isort rules to ruff
* sort imports
* import `make_table`
* try stages for no-commit-to-branch
* turn on mypy for pre-commit
* test
* change no-commit-to-branch to default
* nits
* fixed dependency
- 19 Dec, 2023 2 commits
Pasquale Minervini authored
* self.device in huggingface.py line 210: there, self.device is a str and does not have a "type" attribute
* Update huggingface.py: handle both the case where `self.device` is a `torch.device` and the case where it is a string
* Update huggingface.py
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
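The fix above must cope with `self.device` being either a plain string like `"cuda:0"` or a `torch.device` (only the latter has a `.type` attribute). A minimal sketch of that dual handling (the helper name is illustrative, not the harness's actual code):

```python
def device_type(device):
    """Return the bare device type ("cuda", "cpu", ...) whether `device`
    is a string like "cuda:0" or a torch.device-like object."""
    if isinstance(device, str):
        # Strings may carry an optional ":<index>" suffix; strip it.
        return device.split(":", 1)[0]
    # torch.device exposes the bare type directly via its `type` attribute.
    return device.type
```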
Hailey Schoelkopf authored
- 16 Dec, 2023 2 commits
Baber Abbasi authored
* fixed syntactic nits
* fix temperature and seed
* fix logprobs
* fixup merge
Baber Abbasi authored
- 15 Dec, 2023 2 commits
Vicki Boykis authored
* enabling OpenAI completions via gooseai
* openai-completions and pin openai
Baber Abbasi authored
- 14 Dec, 2023 3 commits
NanoCode012 authored
* fix: passing max_length to vllm engine args
* feat: add `max_model_len`
* chore: lint
Yuliang Li authored
Hailey Schoelkopf authored
* modularize HFLM code
* pass through extra kwargs to AutoModel.from_pretrained call
* remove explicit model_kwargs
* rename gptq -> autogptq
* fix tokenizer pad token errors
* ensure model always respects device_map and autogptq's selected devices
* add a _get_config helper fn
- 12 Dec, 2023 2 commits
Hailey Schoelkopf authored
Hailey Schoelkopf authored
- 10 Dec, 2023 1 commit
baberabb authored
- 04 Dec, 2023 1 commit
baberabb authored
- 03 Dec, 2023 3 commits
- 29 Nov, 2023 2 commits