- 15 Oct, 2025 (4 commits)
- 21 Sep, 2025 (1 commit)
kaixuanliu authored
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
- 26 Aug, 2025 (1 commit)
Janna authored
* add AIME tasks
* standardize the repeats
* fix task naming
* aime25 only has test set
* edit readme
* add utils
* standardize
* fix case sensitivity
* repeat once
* lint
* more linting
* lint huggingface.py
- 13 Aug, 2025 (1 commit)
Xinhe Shi authored
- 23 Jul, 2025 (2 commits)
Baber Abbasi authored
* Fix: pin datasets < 4.0
* fix
* update type hints in HF
* fix hellaswag path
Avelina Asada Hadji-Kyriacou authored
* added support for additional chat template arguments
* use `enable_thinking`
* add wrap logging function
* add `chat_template_args` back to HF

Co-authored-by: Baber <baber@hey.com>
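The `chat_template_args` mechanism is easiest to picture as a pass-through to `tokenizer.apply_chat_template`. A minimal sketch, assuming a Qwen3-style template that understands `enable_thinking`; the model id and dict plumbing are illustrative, not the harness's exact code:

```python
# Sketch: forward user-supplied chat-template kwargs to apply_chat_template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")  # placeholder model
messages = [{"role": "user", "content": "What is 2 + 2?"}]

chat_template_args = {"enable_thinking": False}  # e.g. parsed from model_args
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    **chat_template_args,  # extra kwargs are handed to the Jinja template
)
print(prompt)
```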
- 16 Jul, 2025 (1 commit)
Baber Abbasi authored
* feat: add postprocessing for generated text to strip stop sequences and thinking tokens
* nit
* fix: trim leading whitespace after stripping thinking tokens from generation
* feat: add think_end_token to model_args
* nit
* nit
* nit
* add to readme
* nit
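The stripping logic amounts to: cut everything up to and including the configured `think_end_token`, then trim the leading whitespace left behind. A sketch under those assumptions (the function name and default are illustrative):

```python
# Sketch of the post-processing idea for reasoning models.
def postprocess_generation(text: str, think_end_token: str = "</think>") -> str:
    # Drop the reasoning block if the model emitted one.
    if think_end_token in text:
        text = text.split(think_end_token, 1)[1]
    # Trim the whitespace that typically follows the thinking block.
    return text.lstrip()

assert postprocess_generation("<think>chain of thought</think>\n\n4") == "4"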
- 14 Jul, 2025 (1 commit)
Avelina Asada Hadji-Kyriacou authored
- 03 Jul, 2025 (1 commit)
Ankush authored
* fix(hf-gguf): skip gguf_file if external tokenizer is provided
* docs(readme): add instructions for evaluating GGUF models with Hugging Face backend
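The fix boils down to a guard: when the user supplies a standalone tokenizer, `gguf_file` must not be forwarded to the tokenizer loader. A hedged sketch (argument names are assumptions, not the exact HFLM signature):

```python
from typing import Optional
from transformers import AutoTokenizer

def load_tokenizer(pretrained: str, tokenizer: Optional[str], gguf_file: Optional[str]):
    if tokenizer is not None:
        # An external tokenizer was given: do not pass gguf_file along.
        return AutoTokenizer.from_pretrained(tokenizer)
    kwargs = {"gguf_file": gguf_file} if gguf_file else {}
    return AutoTokenizer.from_pretrained(pretrained, **kwargs)
```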
- 30 Jun, 2025 (1 commit)
Baber Abbasi authored
* Try fixing issue 3026, caused by the quantization_config argument introduced in commit 758c5ed8. The argument is a dict, but for a GPTQ-quantized model this conflicts with the Hugging Face interface, which expects a QuantizationConfigMixin. The current solution removes the quantization_config argument in HFLM._create_model() of lm_eval/models/huggingface.py; further modification is required to restore the functionality provided by the previous commit.
* wrap quantization_config in AutoQuantizationConfig
* handle quantization config not dict
* wrap quantization_config in AutoQuantizationConfig if dict

Co-authored-by: shanhx2000 <hs359@duke.edu>
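The final approach can be sketched as a small coercion step, assuming `AutoQuantizationConfig` is importable as in recent transformers releases:

```python
from transformers.quantizers import AutoQuantizationConfig

def coerce_quantization_config(quantization_config):
    # from_pretrained expects a QuantizationConfigMixin instance; a raw dict
    # (e.g. lifted from a model's config.json) must be wrapped first.
    if isinstance(quantization_config, dict):
        return AutoQuantizationConfig.from_dict(quantization_config)
    return quantization_config
```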
- 25 Jun, 2025 (1 commit)
Younes B authored
* add subfolder
* lint
* change it to empty string
* fix typehints

Co-authored-by: Baber <baber@hey.com>
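What subfolder support enables, roughly: loading weights that live in a subdirectory of a Hub repo. Repo and folder names here are placeholders:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-repo",        # hypothetical repo
    subfolder="checkpoint-1000",  # defaults to the empty string when unset
)
```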
- 02 Jun, 2025 (1 commit)
Yury Sulsky authored
- 23 May, 2025 (1 commit)
Ameya Godbole authored
* FIX: error due to grouping queries with different continuation lengths. Make Collator choose the query with the longest continuation as the candidate for generation.
* use max for key selection
* added comments explaining variable continuation length (identical ctx + cont[:-1])

Co-authored-by: Baber <baber@hey.com>
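The key-selection fix is essentially a `max` over continuation lengths; a minimal sketch with assumed data shapes:

```python
# Pick, from a group of requests sharing the same ctx + cont[:-1], the one
# whose continuation is longest, so the candidate's generation covers every
# shorter continuation in the group.
def pick_candidate(requests):
    # requests: list of (context_tokens, continuation_tokens) tuples
    return max(requests, key=lambda r: len(r[1]))

group = [([1, 2, 3], [7, 8]), ([1, 2, 3], [7, 8, 9, 10])]
assert pick_candidate(group) == ([1, 2, 3], [7, 8, 9, 10])
```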
- 18 Apr, 2025 (1 commit)
Avelina9X authored
* Added softmax_dtype argument to coerce log_softmax computations
* move softmax_dtype

Co-authored-by: Baber <baber@hey.com>
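The idea behind `softmax_dtype`: upcast half-precision logits before the normalizer is computed, so float16/bfloat16 models do not lose precision. The wrapper below is illustrative; only the argument name comes from the commit:

```python
from typing import Optional
import torch
import torch.nn.functional as F

def log_softmax(logits: torch.Tensor, softmax_dtype: Optional[torch.dtype] = None) -> torch.Tensor:
    # Coerce to e.g. torch.float32 before normalizing, if requested.
    if softmax_dtype is not None:
        logits = logits.to(softmax_dtype)
    return F.log_softmax(logits, dim=-1)
```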
- 16 Apr, 2025 (1 commit)
Baber Abbasi authored
* fix resolve_hf_chat_template version
* pre-commit
- 15 Apr, 2025 (1 commit)
Jerry Zhang authored
* Add support for quantization_config. Summary: previously quantization_config was ignored, so torchao-quantized models were not supported; this PR adds that. Test plan: lm_eval --model hf --model_args pretrained=jerryzh168/gemma3-int4wo --tasks hellaswag --device cuda:0 --batch_size 8
* quantization_config is optional
- 11 Mar, 2025 (1 commit)
Baber Abbasi authored
- 21 Feb, 2025 (1 commit)
Lintang Sutawika authored
* changed source of eval_logger
* allow eval_logger to be set from args
* removed verbosity arg from non-main methods
* fix logging
* pre-commit
* set verbosity in eval logger
* replace utils.eval_logger
* fix logging in main
* add logging to docs
* add logging message
* nit
* add logging to docs
* refactor setup_logging to utils

Co-authored-by: Baber <baber@hey.com>
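A plausible shape for the refactored helper, as a sketch only; the real `utils.setup_logging` may differ:

```python
import logging

def setup_logging(verbosity: str = "INFO") -> logging.Logger:
    # One place to configure the shared eval logger at a caller-chosen level.
    logging.basicConfig(
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
        level=getattr(logging, verbosity.upper()),
    )
    return logging.getLogger("lm-eval")
```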
- 19 Jan, 2025 (1 commit)
Baber Abbasi authored
* update pre-commit
- 15 Jan, 2025 (1 commit)
Baber Abbasi authored
* add assistant prefix
* add arc_challenge from llama
* nit
* nit
* nit
* add assistant prefix
* add mmlu_llama
* nit
* nit
* Revert "nit". This reverts commit 6a97f8356237305e375212b966b30e8de59dd4bc.
* fix regex bug
* add assistant_prefix to vllm
* add `Question:`
* add mmlu_pro
* add fewshot assistant_prefix
* use `assistant_prefill`
* typehints
* nits
* nits
* add to docs
* add readme
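`assistant_prefill` seeds the assistant turn so the model continues from a fixed prefix. A sketch of the effect; the concatenation shown is illustrative, not the harness's exact plumbing:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
messages = [{"role": "user", "content": "Question: 2 + 2 = ?"}]

prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
assistant_prefill = "The best answer is"
prompt += assistant_prefill  # the model now completes the prefilled turn
```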
- 07 Jan, 2025 (1 commit)
CL-ModelCloud authored
* hf support load gguf file
* code review
* code review
* code clean up
* note about use_fast compat with gguf

Co-authored-by: Qubitium-ModelCloud <qubitium@modelcloud.ai>
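GGUF loading as enabled here follows the transformers GGUF interface; the repo and filename below are placeholders, and note the last bullet's caveat about `use_fast` tokenizer compatibility:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo, gguf = "some-org/some-model-GGUF", "model-q4_k_m.gguf"  # hypothetical
tokenizer = AutoTokenizer.from_pretrained(repo, gguf_file=gguf)
model = AutoModelForCausalLM.from_pretrained(repo, gguf_file=gguf)
```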
- 19 Dec, 2024 (1 commit)
Baber Abbasi authored
* add warning for truncation
- 16 Dec, 2024 (1 commit)
Baber Abbasi authored
* batch all rolling token windows
* nit
* copy to vllm
* fix max_length for `get_rolling_token_windows`
* bugfix
* bugfix
* add type hints
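The windowing being batched can be pictured with a simplified stand-in for `get_rolling_token_windows`; the real function takes an extra `context_len` parameter and differs in detail:

```python
# Split a long token sequence into (context, continuation) windows that
# together cover every token exactly once, so their loglikelihoods can be
# scored in one batch rather than one window at a time.
def rolling_windows(tokens, prefix_token, max_len):
    start = 0
    while start < len(tokens):
        end = min(start + max_len, len(tokens))
        # First window is conditioned on a prefix token (e.g. BOS/EOT);
        # later windows on at least the immediately preceding token.
        context = [prefix_token] if start == 0 else tokens[start - 1:start]
        yield context, tokens[start:end]
        start = end

print(list(rolling_windows([1, 2, 3, 4, 5], prefix_token=0, max_len=2)))
```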
- 30 Nov, 2024 (1 commit)
Baber Abbasi authored
* make utility function to handle `until`
* fix text
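A utility in the spirit of the commit, cutting a generation at the first occurrence of any stop sequence in `until` (name and signature are illustrative):

```python
def apply_stop_sequences(text: str, until: list) -> str:
    # Truncate at each stop string if present; order-independent result
    # because every pass can only shorten the text.
    for stop in until:
        idx = text.find(stop)
        if idx != -1:
            text = text[:idx]
    return text

assert apply_stop_sequences("4\n\nQuestion: next", ["\n\n", "</s>"]) == "4"
```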
- 16 Nov, 2024 (1 commit)
Baber Abbasi authored
* pre-commit update
* update github actions
* make logging less verbose
* fix artifacts
- 11 Nov, 2024 (2 commits)
Baber Abbasi authored
Baber Abbasi authored
* batch commit
* Revert "batch commit". This reverts commit d859d1ca.
* batch commit
* checkout from main
* checkout from main
* checkout from main
* checkout from main
* checkout from main
* cleanup
* cleanup
* cleanup
* cleanup
* cleanup
* cleanup
* cleanup
* cleanup
* cleanup
* Chat template fix (#7)
* cleanup
* cleanup
* cleanup
* linting
* fix tests
* add ifeval install to new_task CI
* Revert "add ifeval install to new_task CI". This reverts commit 1d19449bb7fbfa05d51e7cd20950475eae533bf1.
* adds leaderboard tasks (#1)
* adds leaderboard tasks
* Delete lm_eval/tasks/leaderboard/leaderboard_chat_template.yaml
* add readme
* Delete lm_eval/tasks/leaderboard/mmlu_pro/mmlu_pro_chat_template.yaml
* modify readme
* fix bbh task
* fix bbh salient task
* modify the readme
* Delete lm_eval/tasks/leaderboard/ifeval/README.md
* Delete lm_eval/tasks/leaderboard/math/README.md
* add leaderboard to the tasks repertory
* add announcement about new leaderboard tasks
* linting
* Update README.md
* installs ifeval dependency in new_task github workflow
* fix math parser
* fix math parser
* fix version
* add warning about chat template

Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
Co-authored-by: Nathan Habib <nathan.habib@huggingface.com>
Co-authored-by: Nathan Habib <nathan.habib@huggingface.co>
Co-authored-by: Nathan Habib <30601243+NathanHB@users.noreply.github.com>
Co-authored-by: Nathan Habib <nathan.habib19@gmail.com>
- 07 Nov, 2024 (1 commit)
Baber Abbasi authored
* pass device_map other than auto for parallelize
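What the fix permits, roughly: forwarding a user-chosen `device_map` instead of hard-coding `auto` when parallelizing. The values shown are examples:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "gpt2",
    device_map="balanced_low_0",  # or an explicit mapping such as {"": 0}
)
```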
- 31 Oct, 2024 (1 commit)
Qubitium-ModelCloud authored
* support gptqmodel
* code opt
* add gptqmodel option
* Update huggingface.py
* Update pyproject.toml
* gptqmodel version upgraded to 1.0.6
* GPTQModel version upgraded to 1.0.8
* Update pyproject.toml
* fix ruff-format error
* add gptqmodel test
* Update gptqmodel test model
* skip cuda
* python3.8 compatible
* Update README.md
* Update README.md

Co-authored-by: CL-ModelCloud <cl@modelcloud.ai>
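A hypothetical invocation of the option through the Python API; the `gptqmodel=True` flag follows the commit's wording but should be checked against the README it updates, and the model id is a placeholder:

```python
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ModelCloud/some-gptq-model,gptqmodel=True",  # placeholder id
    tasks=["hellaswag"],
)
```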
- 22 Oct, 2024 (1 commit)
Leonid Sinev authored
* Replace generic exception classes with more specific ones
* rerun pre-commit to pass linter tests
* Revert "rerun pre-commit to pass linter tests". This reverts commit 67f88ccf144469853217704520e613196042d859.
* reduce repetitions in errors
* Replace generic exception class with a more specific one
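The pattern applied throughout, in general form (the function is illustrative, not a harness API):

```python
def resolve_batch_size(batch_size):
    # Before: `raise Exception("invalid batch size")` gave callers nothing
    # specific to catch. After: a precise class and an actionable message.
    if isinstance(batch_size, int):
        return batch_size
    if batch_size == "auto":
        return 0  # sentinel meaning "detect automatically"
    raise ValueError(f"batch_size must be an int or 'auto', got {batch_size!r}")
```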
- 08 Oct, 2024 (1 commit)
Baber Abbasi authored
* switch conditional checks to `self.backend`
* nit
* nit
* commit feedback
* fix test; update precommit hooks
* add escape hatch for custom self.AUTO_MODEL_CLASS
* add escape hatch for custom self.AUTO_MODEL_CLASS
* fix
* move assertion
* add logging messages
* update AUTO_MODEL_CLASS behavior in _get_backend

Co-authored-by: haileyschoelkopf <hailey@eleuther.ai>
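The shape of the change, sketched: branch on an explicit backend string, with a pinned `AUTO_MODEL_CLASS` kept as the escape hatch. This is a simplification of the real `_get_backend`:

```python
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM

def get_backend_class(backend: str, auto_model_class=None):
    if auto_model_class is not None:  # a subclass pinned its own class
        return auto_model_class
    if backend == "causal":
        return AutoModelForCausalLM
    if backend == "seq2seq":
        return AutoModelForSeq2SeqLM
    raise ValueError(f"unknown backend: {backend!r}")
```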
- 13 Sep, 2024 (1 commit)
Lintang Sutawika authored
* add WIP hf vlm class
* add doc_to_image
* add mmmu tasks
* fix merge conflicts
* add lintang's changes to hf_vlms.py
* fix doc_to_image
* added yaml_path for config-loading
* revert
* add line to process str type v
* update
* modeling cleanup
* add aggregation for mmmu
* rewrite MMMU processing code based only on the MMMU authors' repo (doc_to_image still WIP)
* implemented doc_to_image
* update doc_to_image to accept list of features
* update functions
* readd image processed
* update args process
* bugfix for repeated images fed to model
* push WIP loglikelihood code
* commit most recent code (generative; qwen2-vl testing)
* preliminary image_token_id handling
* small mmmu update: some qs have >4 mcqa options
* push updated modeling code
* use processor.apply_chat_template
* add mathvista draft
* nit
* nit
* ensure no footguns in text<>multimodal LM<>task incompatibility
* add notification to readme regarding launch of prototype!
* fix compatibility check
* reorganize mmmu configs
* chat_template=None
* add interleave chat_template
* add condition
* add max_images; interleave=true
* nit
* testmini_mcq
* nit
* pass image string; convert img
* add vllm
* add init
* vlm add multi attr
* fixup
* pass max images to vllm model init
* nit
* encoding to device
* fix HFMultimodalLM.chat_template?
* add mmmu readme
* remove erroneous prints
* use HFMultimodalLM.chat_template; restore tasks/__init__.py
* add docstring for replace_placeholders in utils
* fix `replace_placeholders`; set image_string=None
* fix typo
* cleanup + fix merge conflicts
* update MMMU readme
* del mathvista
* add some sample scores
* Update README.md
* add log msg for image_string value

Co-authored-by: haileyschoelkopf <hailey@eleuther.ai>
Co-authored-by: Baber Abbasi <baber@eleuther.ai>
Co-authored-by: Baber <baber@hey.com>
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
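A guess at what `replace_placeholders` in utils does, based on the bullets about `max_images` and image-token substitution; treat the signature and semantics as assumptions:

```python
# Cap how many image placeholders survive in a prompt and swap them for the
# model's own image token, dropping any excess placeholders.
def replace_placeholders(string, placeholder, replacement, max_count):
    parts = string.split(placeholder)
    out = parts[0]
    for i, part in enumerate(parts[1:]):
        # Keep (and replace) up to max_count placeholders; drop the rest.
        out += (replacement if i < max_count else "") + part
    return out

assert replace_placeholders("a <image> b <image> c", "<image>", "<img>", 1) == "a <img> b  c"
```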
- 04 Sep, 2024 (1 commit)
Baber Abbasi authored
* default chat template method fix
* move chat_template to TemplateLM
* remove hotfix
* handle openai `chat_template`
* Update lm_eval/api/model.py (Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>)
* add 'max_tokens' to gen_kwargs
* pre-commit

Co-authored-by: KonradSzafer <szafer.konrad@gmail.com>
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
- 28 Aug, 2024 (1 commit)
Hailey Schoelkopf authored
* fix revision type
* allow for None-input loglikelihood reqs to be cached
* handle no remaining cache items
* pre-commit
* change cache_hook.add_partial(loglikelihood_rolling...) convention

Co-authored-by: Baber Abbasi <baber@eleuther.ai>
- 22 Aug, 2024 (1 commit)
Wessel Poelman authored
- 20 Aug, 2024 (1 commit)
KonradSzafer authored
* multiple chat template support
* help doc update
* add transformers link to docstring
* model args update
* comment update
* statement simplification
* simplified chat_template property
* docs update
* removed template arg from HFLM class
* interface doc update
* model guide update
* interface doc update
* reuse apply_chat_template variable
* model guide refactor
* interface doc update
* removed old definition
* last nits
* last nits
* last nits
* better wording
* last nits
* Remove unnecessary Optional
* Apply suggestions from code review (Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>)
* return variable rename

Co-authored-by: Clémentine Fourrier <22726840+clefourrier@users.noreply.github.com>
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
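Selecting among a model's named chat templates is the transformers feature this exposes; the tokenizer id and template name below are placeholders for a model that actually defines them:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("some-org/model-with-templates")  # hypothetical
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hi"}],
    chat_template="tool_use",  # pick a named template instead of the default
    tokenize=False,
    add_generation_prompt=True,
)
```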
- 05 Aug, 2024 (1 commit)
Hailey Schoelkopf authored