- 28 Jun, 2024 (1 commit)
Baber Abbasi authored
* add chat template
* refactor token padding
* nit
* nit
* check on failing test
* check transformers version
* remove transformers pin
* add ids to test
* nit
* fixup
* fix bos bug
* nit
* fixup! fix bos bug
* increase tolerance for table test
* don't detokenize vllm logprobs
* Update lm_eval/models/utils.py
  Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
* pre-commit run --all-files
---------
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
- 18 Jun, 2024 (1 commit)
LSinev authored
- 13 Jun, 2024 (2 commits)
Hailey Schoelkopf authored
* Update vllm_causallms.py
* adjust
---------
Co-authored-by: lintangsutawika <lintang@eleuther.ai>
Baber Abbasi authored
* `samples` is newline delimited
* updated git and pre-commit
* appease pre-commit
* nit
* Revert back for now
* Revert for now
---------
Co-authored-by: Lintang Sutawika <lintang@eleuther.ai>
- 12 Jun, 2024 (1 commit)
Nikita Lozhnikov authored
Fix bug where `self.max_tokens` was not set
- 11 Jun, 2024 (1 commit)
Hailey Schoelkopf authored
- 03 Jun, 2024 (1 commit)
KonradSzafer authored
* initial chat template
* tokenizer attribute check
* variable rename
* interface update
* system instruction
* system inst default update
* fewshot as multiturn
* typing update
* indent update
* added comments
* Adding a fewshot in a more readable way
* linting
* Moved apply chat template to LM
* multiturn alternation fix
* cache key update
* apply chat template method fix
* add system prompt hash to cache_key
* tokenizer name property for cache_key
* property name fix
* linting backward compatibility fix
* docs and errors update
* add documentation on adding chat template compatibility to model_guide
* fewshot as multiturn check fix
* saving system inst and chat template in results
* eval tracker update
* docs update
* Apply suggestions from code review
  Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
---------
Co-authored-by: haileyschoelkopf <hailey@eleuther.ai>
Co-authored-by: Clémentine Fourrier <22726840+clefourrier@users.noreply.github.com>
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
- 30 May, 2024 (1 commit)
Huazhong Ji authored
* [HFLM] Add support for Ascend NPU
  Co-authored-by: jiaqiw09 <jiaqiw960714@gmail.com>
  Co-authored-by: zhabuye <2947436155@qq.com>
* bump accelerate dependency version to 0.26.0 for NPU compat.
---------
Co-authored-by: jiaqiw09 <jiaqiw960714@gmail.com>
Co-authored-by: zhabuye <2947436155@qq.com>
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
- 28 May, 2024 (1 commit)
Michael Goin authored
* Reorder vllm imports in vllm_causallms.py
* Update vllm_causallms.py
- 24 May, 2024 (1 commit)
Huazhong Ji authored
- 23 May, 2024 (1 commit)
Edward Gan authored
- 19 May, 2024 (1 commit)
Nick Doiron authored
* resize model embeddings
* resize only
* tokenizer help
* load tokenizer before model
* add comment and run precommit lint
* Add log message
  Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
---------
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
- 07 May, 2024 (2 commits)
Hailey Schoelkopf authored
* fix auto-batch size bug for seq2seq models
* alphabetize task + group tables; fix eval tracker bug
* fix eval tracker bug
Hailey Schoelkopf authored
- 05 May, 2024 (2 commits)
ciaranby authored
kwrobel.eth authored
* remove echo parameter in OpenAI completions API
* remove context length parameter doc string
- 03 May, 2024 (1 commit)
KonradSzafer authored
* evaluation tracker implementation
* OVModelForCausalLM test fix
* typo fix
* moved methods args
* multiple args in one flag
* loggers moved to dedicated dir
* improved filename sanitization
- 02 May, 2024 (2 commits)
Helena Kloosterman authored
* Add option to set OpenVINO config
* Use utils.eval_logger for logging
bcicc authored
* vllm lora support
* remove print
* version check, rename lora kwarg
- 18 Apr, 2024 (1 commit)
Sergio Perez authored
- 16 Apr, 2024 (2 commits)
Michael Goin authored
* Add neuralmagic models for SparseML and DeepSparse
* Update to latest and add test
* Format
* Fix list to List
* Format
* Add deepsparse/sparseml to automated testing
* Update pyproject.toml
* Update pyproject.toml
* Update README
* Fixes for dtype and device
* Format
* Fix test
* Apply suggestions from code review
  Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
* Address review comments!
---------
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
KonradSzafer authored
* added delta weights
* removed debug
* readme update
* better error handling
* autogptq warn
* warn update
* peft and delta error, explicitly deleting _model_delta
* linter fix
- 05 Apr, 2024 (1 commit)
Seungwoo Ryu authored
* claude3
* supply for anthropic claude3
* supply for anthropic claude3
* anthropic config changes
* add callback options on anthropic
* line passed
* claude3 tiny change
* help anthropic installation
* mention sysprompt / being careful with format in readme
---------
Co-authored-by: haileyschoelkopf <hailey@eleuther.ai>
- 01 Apr, 2024 (1 commit)
Michael Goin authored
The OpenAI interface supports batch size as an argument to the completions API, but does not seem to support specifying it on the CLI, i.e. `lm_eval --model openai-completions --batch_size 16 ...`, because of a simple lack of str->int conversion. This is confirmed by my usage and the stack trace from running `OPENAI_API_KEY=dummy lm_eval --model local-completions --tasks gsm8k --batch_size 16 --model_args model=nm-testing/zephyr-beta-7b-gptq-g128,tokenizer_backend=huggingface,base_url=http://localhost:8000/v1`:
```
Traceback (most recent call last):
  File "/home/michael/venv/bin/lm_eval", line 8, in <module>
    sys.exit(cli_evaluate())
  File "/home/michael/code/lm-evaluation-harness/lm_eval/__main__.py", line 341, in cli_evaluate
    results = evaluator.simple_evaluate(
  File "/home/michael/code/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper
    return fn(*args, **kwargs)
  File "/home/michael/code/lm-evaluation-harness/lm_eval/evaluator.py", line 251, in simple_evaluate
    results = evaluate(
  File "/home/michael/code/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper
    return fn(*args, **kwargs)
  File "/home/michael/code/lm-evaluation-harness/lm_eval/evaluator.py", line 390, in evaluate
    resps = getattr(lm, reqtype)(cloned_reqs)
  File "/home/michael/code/lm-evaluation-harness/lm_eval/models/openai_completions.py", line 263, in generate_until
    list(sameuntil_chunks(re_ord.get_reordered(), self.batch_size)),
  File "/home/michael/code/lm-evaluation-harness/lm_eval/models/openai_completions.py", line 251, in sameuntil_chunks
    if len(ret) >= size or x[1] != lastuntil:
TypeError: '>=' not supported between instances of 'int' and 'str'
```
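A minimal sketch of the kind of fix this implies (the helper name is hypothetical): coerce the CLI value, which argparse passes through as a string, before it reaches numeric comparisons such as `len(ret) >= size`.
```python
def coerce_batch_size(raw):
    # --batch_size arrives as a str from the CLI; turn "16" into 16 while
    # leaving non-numeric values (e.g. "auto") untouched.
    return int(raw) if isinstance(raw, str) and raw.isdigit() else raw

assert coerce_batch_size("16") == 16
assert coerce_batch_size("auto") == "auto"
```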
- 27 Mar, 2024 (1 commit)
Hailey Schoelkopf authored
- 26 Mar, 2024 (1 commit)
Sergio Perez authored
* Integration of NeMo models into LM Evaluation Harness library
* rename nemo model as nemo_lm
* move nemo section in readme after hf section
* use self.eot_token_id in get_until()
* improve progress bar showing loglikelihood requests
* data replication or tensor/pipeline replication working fine within one node
* run pre-commit on modified files
* check whether dependencies are installed
* clarify usage of torchrun in README
- 25 Mar, 2024 (2 commits)
Lintang Sutawika authored
* fix on --task list
* add fixes to tokenization
* differentiate encoding for seq2seq and decoder
* return token setting
* format for pre-commit
* Seq2seq fix, pt2 (#1630)
* getting model class only when defined
* encode_pair handles None, add_special_tokens turned into dict with default value
---------
Co-authored-by: achervyakov <77295913+artemorloff@users.noreply.github.com>
WoosungMyung authored
* peft Version Assertion
* fix the linter issue
- 21 Mar, 2024 (1 commit)
Hailey Schoelkopf authored
- 20 Mar, 2024 (1 commit)
Hailey Schoelkopf authored
* make vllm use prefix_token_id; have prefix_token_id be an optional method to define
* custom_prefix_token_id wasn't set if not passed
- 19 Mar, 2024 (2 commits)
achervyakov authored
Hailey Schoelkopf authored
This reverts commit b7923a84.
- 18 Mar, 2024 (1 commit)
kwrobel.eth authored
* use BOS token in loglikelihood
* improve comments
* add model arg
* log prefix token id
* log prefix token id
* Update lm_eval/api/model.py
  Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
* change name to prefix_token_id
---------
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
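A rough sketch (not necessarily the harness's exact code) of the overridable hook this change describes: loglikelihood scoring prepends a prefix token, which defaults to the EOT token but can be overridden, e.g. with a true BOS token.
```python
class TemplateLM:
    @property
    def eot_token_id(self) -> int:
        raise NotImplementedError

    @property
    def prefix_token_id(self) -> int:
        # Subclasses (or a model arg) may override this, e.g. returning
        # tokenizer.bos_token_id; by default the EOT token is reused.
        return self.eot_token_id
```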
- 17 Mar, 2024 (1 commit)
Lintang Sutawika authored
* Differentiate _encode_pair setting for decoder and enc-dec models
* tok_decode to not skip special token so that eos doesn't become empty string
* Update model.py
* Update model.py
* Update huggingface.py
* Update lm_eval/models/huggingface.py
  Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
* Update model.py
---------
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
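The empty-eos behaviour referenced above is easy to reproduce with any Hugging Face tokenizer (gpt2 is used here purely as an example): decoding the EOS token with `skip_special_tokens=True` yields an empty string.
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
eos = tok.eos_token_id
print(repr(tok.decode([eos], skip_special_tokens=True)))   # ''
print(repr(tok.decode([eos], skip_special_tokens=False)))  # '<|endoftext|>'
```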
- 13 Mar, 2024 (1 commit)
achervyakov authored
* add manual tqdm disabling management
* add typing to all new args
* apply precommit changes
---------
Co-authored-by: haileyschoelkopf <hailey@eleuther.ai>
- 09 Mar, 2024 (1 commit)
Antoni Baum authored
* Add compatibility for vLLM's new Logprob object
* Fix
* Update lm_eval/models/vllm_causallms.py
* fix format?
* trailing whitespace
---------
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
- 06 Mar, 2024 (1 commit)
Sungho Park authored
Update installation commands in openai_completions.py and the contributing document, and update the wandb_args description (#1536)
* Update openai completions and docs/CONTRIBUTING.md
* Update wandb args description
* Update docs/interface.md
---------
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
- 03 Mar, 2024 (1 commit)
Baber Abbasi authored
* use `@ray.remote` with distributed vLLM
* update versions
* bugfix
* unpin vllm
* fix pre-commit
* added version assertion error
* Revert "added version assertion error"
  This reverts commit 8041e9b78e95eea9f4f4d0dc260115ba8698e9cc.
* added version assertion for DP
* expand DP note
* add warning
* nit
* pin vllm
* fix typos
- 01 Mar, 2024 (2 commits)
Hailey Schoelkopf authored
* add undistribute + use more_itertools
* remove divide() util fn
* add more_itertools as dependency
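For context, `more_itertools.distribute` splits a sequence round-robin across n chunks; an `undistribute` then restores the original order after per-chunk processing. A sketch of the idea (an illustration under that assumption, not necessarily the utility as merged):
```python
import itertools
from more_itertools import distribute

_FILL = object()  # sentinel, so legitimate None values survive the filter

def undistribute(chunks):
    # Interleave the round-robin chunks produced by more_itertools.distribute,
    # restoring the original ordering of the flat sequence.
    return [
        x
        for x in itertools.chain.from_iterable(
            itertools.zip_longest(*[list(c) for c in chunks], fillvalue=_FILL)
        )
        if x is not _FILL
    ]

parts = distribute(3, range(8))  # chunks: [0, 3, 6], [1, 4, 7], [2, 5]
assert undistribute(parts) == list(range(8))
```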
Hailey Schoelkopf authored