30 Jul, 2025 (4 commits)
    • nit · 13aa5096
      Baber authored
    • fix max_gen_toks · d985629d
      Baber authored
    • fix · f4b4f486
      Baber authored
    • fix · c75c66e4
      Baber authored
16 Jul, 2025 (2 commits)
    • `bbh_cot_fewshot`: Removed repeated "Let's think step by step." text from bbh cot prompts (#3140) · c2be7211
      philipdoldo authored

      * Removed the "Let's think step by step." text from the start of the target entry in each of the samples, so the phrase is no longer repeated twice in the few-shot prompts and the behavior matches the original bbh repository. This applied to 26 of the 27 subtasks; the only exception is boolean_expressions.yaml. For boolean_expressions.yaml there is arguably a separate error: the "Remember that (i) ..." text does not appear after the final "A: Let's think step by step." in the prompt. Models like EleutherAI/gpt-neo-125m tend to begin their answers with this string anyway (copying what was done in the few-shot prompts), but it really should have been part of the prompt, much like "A: Let's think step by step." is included in the prompt for all of the cot tasks. However, the original bbh repo has the same issue, so it is kept this way for consistency; the discrepancy is just noted for the record. A minimal sketch of the deduplication follows this entry.
      
      * feat: remove extra space from answers; add changelog
      
      ---------
      Co-authored-by: Baber <baber@hey.com>
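
      The deduplication could be approximated by the sketch below. The directory path and the `samples`/`target` field names are assumptions for illustration, not the harness's actual task schema.

      ```python
      # Hypothetical sketch of the fix described above: strip a leading
      # "Let's think step by step." from each few-shot target so the phrase
      # is not emitted twice in the assembled prompt. Paths and field names
      # are illustrative assumptions.
      import glob

      import yaml

      PREFIX = "Let's think step by step."

      for path in glob.glob("lm_eval/tasks/bbh/cot_fewshot/*.yaml"):
          with open(path) as f:
              task = yaml.safe_load(f)
          changed = False
          for sample in task.get("samples", []):
              target = sample.get("target", "")
              if target.startswith(PREFIX):
                  # Drop the duplicated phrase plus any whitespace after it.
                  sample["target"] = target[len(PREFIX):].lstrip()
                  changed = True
          if changed:
              with open(path, "w") as f:
                  yaml.safe_dump(task, f, sort_keys=False)
      ```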
    • truncate thinking tags in generations (#3145) · 51ede33c
      Baber Abbasi authored
      * feat: add postprocessing for generated text to strip stop sequences and thinking tokens
      * nit
      * fix: trim leading whitespace after stripping thinking tokens from generation
      * feat: add think_end_token to model_args
      * nit
      * nit
      * nit
      * add to readme
      * nit
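
      The post-processing this change describes could look roughly like the sketch below. The default `</think>` end token, the function name, and the argument names are illustrative assumptions, not the harness's exact API.

      ```python
      # Hypothetical sketch of the generation post-processing: cut off the
      # model's "thinking" span, trim leftover whitespace, then truncate at
      # the first stop sequence. Names and defaults are assumptions.
      def postprocess_generation(
          text: str,
          stop_sequences: list[str] | None = None,
          think_end_token: str = "</think>",
      ) -> str:
          if think_end_token and think_end_token in text:
              # Keep only what follows the end-of-thinking token.
              text = text.split(think_end_token, 1)[1]
              # Trim the leading whitespace left behind after stripping.
              text = text.lstrip()
          # Truncate at the earliest stop sequence, if any occurs.
          for stop in stop_sequences or []:
              idx = text.find(stop)
              if idx != -1:
                  text = text[:idx]
          return text


      print(postprocess_generation("<think>chain of thought</think>\n42"))
      # -> "42"
      ```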
03 Jul, 2025 (1 commit)
    • Bugfix/hf tokenizer gguf override (#3098) · ff41a856
      Ankush authored
      * fix(hf-gguf): skip gguf_file if external tokenizer is provided
      * docs(readme): add instructions for evaluating GGUF models with Hugging Face backend
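
      The fix amounts to not forwarding `gguf_file` when the user supplies an external tokenizer, so the GGUF-embedded tokenizer does not override it. A minimal sketch with `transformers` follows; the wrapper function and its argument names are assumptions, not the harness's actual code.

      ```python
      # Hypothetical sketch: only pass gguf_file to the tokenizer when no
      # external tokenizer was provided. Function and argument names are
      # illustrative assumptions.
      from transformers import AutoTokenizer

      def load_tokenizer(pretrained: str, tokenizer: str | None = None,
                         gguf_file: str | None = None):
          if tokenizer is not None:
              # An external tokenizer wins; skip the GGUF-embedded one.
              return AutoTokenizer.from_pretrained(tokenizer)
          # Otherwise fall back to the tokenizer packed inside the GGUF file.
          return AutoTokenizer.from_pretrained(pretrained, gguf_file=gguf_file)

      # Example usage (repo and file names are placeholders):
      # tok = load_tokenizer("TheBloke/Llama-2-7B-GGUF",
      #                      gguf_file="llama-2-7b.Q4_K_M.gguf")
      ```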