1. 07 Jul, 2025 7 commits
  2. 06 Jul, 2025 2 commits
  3. 05 Jul, 2025 5 commits
  4. 04 Jul, 2025 8 commits
  5. 03 Jul, 2025 6 commits
  6. 01 Jul, 2025 1 commit
  7. 30 Jun, 2025 9 commits
    • add template configs · 15d07121
      Baber authored
    • update type hints · 3ba4e897
      Baber authored
    • update type hints · 9b192374
      Baber authored
    • FixBug: Align the Humaneval with official results for Llama-3.1-70B-Instruct (#3092) · a7ca0435
      jinze authored
      * Fix: Align the Humaneval dataset with official results
      
      Details: (1) Modified "doc_to_text" and "gen_prefix" in "humaneval_instruct.yaml" to match the prompt used in "meta-llama/Llama-3.1-70B-Instruct-evals".
      
      (2) Changed r.rfind("```") to r.find("```") so that it locates the first "```", not the last one.
      
      Results: Partially reproduced the official results: LLaMA3.1-8B-Instruct scores 66.5 (official: 72.6) and LLaMA3.1-70B-Instruct scores 80.5 (official: 80.5).
      
      Ref: PR#2650
      
      * add changelog and version
      
      * add changelog
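      The rfind-to-find change above can be sketched as follows. This is a minimal illustration, not the project's actual extraction filter; the function name and structure are assumptions:

```python
# Hedged sketch (not the actual lm-eval filter code) of why find() is
# preferred over rfind() when cutting a code block out of a completion.
FENCE = "`" * 3  # the ``` delimiter, built indirectly to keep this block well-formed


def extract_first_code_block(r: str) -> str:
    """Return the body of the first fenced code block in r."""
    start = r.find(FENCE)
    if start == -1:
        return r  # no fence at all: return the completion unchanged
    body_start = r.find("\n", start)  # skip the opening fence / language tag
    if body_start == -1:
        return ""
    # find() locates the *first* closing fence after the body; rfind()
    # would grab the last fence in the string, swallowing any trailing
    # prose that happens to contain another fence.
    end = r.find(FENCE, body_start)
    if end == -1:
        return r[body_start + 1 :]
    return r[body_start + 1 : end]
```

      A completion such as "```python ... ``` Explanation with a stray ``` ..." is now truncated at the fence that closes the code, rather than at the last fence in the text.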
    • Baber · cb8dfe63
    • add FewshotConfig · 108674ed
      Baber authored
    • [HF] fix quantization config (#3039) · fea4d11d
      Baber Abbasi authored
      * Try fixing issue 3026, which is caused by the quantization_config argument introduced in commit 758c5ed8.
      The argument is a dict, but for a GPTQ-quantized model this conflicts with the Hugging Face interface, which expects a QuantizationConfigMixin.
      The current solution removes the quantization_config argument in HFLM._create_model() in lm_eval/models/huggingface.py.
      Further modification is required to restore the functionality provided by the previous commit.
      
      * wrap quantization_config in AutoQuantizationConfig
      
      * handle quantization_config that is not a dict
      
      * wrap quantization_config in AutoQuantizationConfig if dict
      
      ---------
      Co-authored-by: shanhx2000 <hs359@duke.edu>
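      The shape of the fix described in the bullets above can be sketched with stand-in classes. The real patch uses transformers' AutoQuantizationConfig and QuantizationConfigMixin; they are only mimicked here so the sketch stays self-contained, and the helper name is an assumption:

```python
# Stand-ins for the transformers classes, so this sketch runs without
# transformers installed. Only the wrapping logic below reflects the fix.
class QuantizationConfigMixin:
    """Stand-in for transformers.utils.quantization_config.QuantizationConfigMixin."""

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)


class AutoQuantizationConfig:
    """Stand-in for transformers' AutoQuantizationConfig."""

    @classmethod
    def from_dict(cls, config_dict):
        # The real class dispatches on config_dict["quant_method"] to the
        # matching config class (GPTQConfig, BitsAndBytesConfig, ...).
        return QuantizationConfigMixin(**config_dict)


def normalize_quantization_config(quantization_config):
    """Wrap a plain dict into a config object; pass anything else through.

    This mirrors "wrap quantization_config in AutoQuantizationConfig if
    dict": dicts conflict with loaders that expect QuantizationConfigMixin,
    while already-built config objects (and None) are left untouched.
    """
    if isinstance(quantization_config, dict):
        return AutoQuantizationConfig.from_dict(quantization_config)
    return quantization_config
```

      With this in place, a GPTQ model whose config arrives as a raw dict is converted before reaching from_pretrained, while callers that already pass a config object see no behavior change.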
    • nit · c5aa5cf0
      Baber authored
    • add MetricConfig · 1b5c6f88
      Baber authored
  8. 25 Jun, 2025 2 commits