1. 24 Jul, 2025 1 commit
    • types · 1f97a945
      Baber authored
  2. 18 Jul, 2025 2 commits
  3. 16 Jul, 2025 1 commit
    • truncate thinking tags in generations (#3145) · 51ede33c
      Baber Abbasi authored
      * feat: add postprocessing for generated text to strip stop sequences and thinking tokens
      
      * nit
      
      * fix: trim leading whitespace after stripping thinking tokens from generation
      
      * feat: add think_end_token to model_args
      
      * nit
      
      * nit
      
      * nit
      
      * add to readme
      
      * nit
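A minimal sketch of the postprocessing this commit describes: cut everything up to and including a configurable think_end_token, then trim the leading whitespace left behind. The function name and default token below are assumptions for illustration, not the harness's actual implementation:

```python
def strip_thinking(text: str, think_end_token: str = "</think>") -> str:
    """Drop a model's reasoning block: remove everything up to and
    including the last think_end_token, then trim leading whitespace."""
    idx = text.rfind(think_end_token)
    if idx != -1:
        text = text[idx + len(think_end_token):]
    return text.lstrip()
```

Generations without a thinking block pass through unchanged, which keeps the postprocessing safe to apply unconditionally.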
  4. 15 Jul, 2025 1 commit
  5. 14 Jul, 2025 1 commit
  6. 06 Jul, 2025 1 commit
  7. 03 Jul, 2025 1 commit
    • Bugfix/hf tokenizer gguf override (#3098) · ff41a856
      Ankush authored
      * fix(hf-gguf): skip gguf_file if external tokenizer is provided
      
      * docs(readme): add instructions for evaluating GGUF models with Hugging Face backend
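The gguf fix can be pictured as a small guard when assembling tokenizer arguments: when the user supplies an external tokenizer, gguf_file is not forwarded to it, since the external tokenizer should take precedence. The function and parameter names below are illustrative, not the harness's actual code:

```python
def tokenizer_kwargs(gguf_file=None, tokenizer=None):
    """Build kwargs for loading a tokenizer: skip gguf_file when an
    external tokenizer is explicitly provided by the user."""
    kwargs = {}
    if gguf_file is not None and tokenizer is None:
        kwargs["gguf_file"] = gguf_file
    return kwargs
```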
  8. 30 Jun, 2025 1 commit
    • [HF] fix quantization config (#3039) · fea4d11d
      Baber Abbasi authored
      * Try fixing issue 3026, which is caused by the quantization_config argument introduced in commit 758c5ed8.
      The argument is a dict, but for a GPTQ-quantized model this conflicts with the Hugging Face interface, which expects a QuantizationConfigMixin.
      The current solution removes the quantization_config argument in HFLM._create_model() in lm_eval/models/huggingface.py.
      Further modification is required to restore the functionality provided by the previous commit.
      
      * wrap quantization_config in AutoQuantizationConfig
      
      * handle quantization config not dict
      
      * wrap quantization_config in AutoQuantizationConfig if dict
      
      ---------
      Co-authored-by: shanhx2000 <hs359@duke.edu>
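The final fix in this series normalizes quantization_config before it reaches the model loader: a plain dict is wrapped via a constructor (in transformers, something like AutoQuantizationConfig.from_dict), while an already-constructed config passes through. A schematic, dependency-free version of that dispatch, with the constructor injected as a parameter so the shape of the logic is visible:

```python
def normalize_quantization_config(config, from_dict):
    """If quantization_config arrives as a plain dict, convert it with
    the given from_dict constructor; objects that already satisfy the
    expected config interface pass through untouched."""
    if isinstance(config, dict):
        return from_dict(config)
    return config
```

In the real harness the constructor would come from transformers; here it is a parameter purely so the sketch stays self-contained.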
  9. 25 Jun, 2025 2 commits
  10. 23 Jun, 2025 1 commit
    • Fix Anthropic API compatibility issues in chat completions (#3054) · 8bc46207
      NourFahmy authored
      * Fix Anthropic API compatibility issues in chat completions
      
      Solves two compatibility issues between the LM Eval Harness and Anthropic's API:
      
      1) The type field issue: Anthropic's Messages API does not accept the type field that other APIs expect, which was previously included
      2) The stop sequences issue: Anthropic requires stop sequences to contain non-whitespace characters
      
      Tested with the most recent Anthropic models (claude-sonnet-4-0, claude-opus-4-0); this resolved my local API errors
      
      * pacify pre-commit
      
      * add type
      
      ---------
      Co-authored-by: Baber <baber@hey.com>
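A hedged sketch of the two fixes described above: drop the extra type key from each message dict and filter out stop sequences that are pure whitespace before sending the request. The function name and payload shape are illustrative, not the harness's actual code:

```python
def sanitize_anthropic_payload(messages, stop):
    """Anthropic's Messages API rejects an extra 'type' key on message
    dicts and requires stop sequences to contain non-whitespace chars."""
    clean_messages = [
        {k: v for k, v in m.items() if k != "type"} for m in messages
    ]
    clean_stop = [s for s in stop if s.strip()]
    return clean_messages, clean_stop
```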
  11. 08 Jun, 2025 1 commit
    • [longbench] fix metric calculation (#2983) · 147e9d61
      Baber Abbasi authored
      * use all answers
      
      * use middle truncation
      
      * maybe fix classification score
      
      * strip classification preds
      
      * [vllm] remove stop tokens post-hoc
      
      * strip all preds
      
      * pacify pre-commit
      
      * start on truncation utility
      
      * add to readme
      
      * add a footgun doc
      
      * fix newline in yaml templates
      
      * do not strip code_sim preds!
      
      * fix pre-commit config
      
      * fix instruction warning
      
      * add note to longbench readme
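Middle truncation, mentioned in this commit, keeps the head and tail of an over-long prompt and drops the middle, on the reasoning that LongBench answers often depend on both ends of the context. A minimal token-level sketch, with names assumed for illustration:

```python
def truncate_middle(tokens, max_len):
    """Fit a token sequence into max_len by keeping the first and last
    halves of the budget and dropping the middle."""
    if len(tokens) <= max_len:
        return tokens
    head = max_len // 2
    tail = max_len - head
    return tokens[:head] + tokens[-tail:]
```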
  12. 03 Jun, 2025 1 commit
  13. 02 Jun, 2025 1 commit
  14. 26 May, 2025 1 commit
  15. 23 May, 2025 2 commits
  16. 21 May, 2025 3 commits
  17. 19 May, 2025 1 commit
  18. 15 May, 2025 1 commit
  19. 10 May, 2025 1 commit
  20. 09 May, 2025 1 commit
  21. 06 May, 2025 1 commit
  22. 18 Apr, 2025 1 commit
  23. 16 Apr, 2025 2 commits
  24. 15 Apr, 2025 1 commit
    • Add support for quantization_config (#2842) · 758c5ed8
      Jerry Zhang authored
      * Add support for quantization_config
      
      Summary:
      Previously quantization_config was ignored, so torchao-quantized models were not supported;
      this PR adds that support.
      
      Test Plan:
      lm_eval --model hf --model_args pretrained=jerryzh168/gemma3-int4wo --tasks hellaswag --device cuda:0 --batch_size 8
      
      * quantization_config is optional
  25. 14 Apr, 2025 1 commit
  26. 04 Apr, 2025 1 commit
  27. 20 Mar, 2025 2 commits
  28. 18 Mar, 2025 1 commit
  29. 17 Mar, 2025 1 commit
  30. 14 Mar, 2025 2 commits
  31. 11 Mar, 2025 1 commit
  32. 04 Mar, 2025 1 commit