1. 18 Jan, 2024 1 commit
  2. 16 Jan, 2024 2 commits
  3. 15 Jan, 2024 7 commits
  4. 12 Jan, 2024 3 commits
  5. 11 Jan, 2024 3 commits
  6. 10 Jan, 2024 3 commits
  7. 08 Jan, 2024 2 commits
  8. 05 Jan, 2024 2 commits
  9. 04 Jan, 2024 2 commits
  10. 02 Jan, 2024 3 commits
  11. 30 Dec, 2023 1 commit
  12. 29 Dec, 2023 1 commit
    • Don't silence errors when loading tasks (#1148) · 34b563b1
      Paul McCann authored
      
      
      * Add example failing task
      
      This task includes an invalid import. That causes an exception, and
      the task is not loaded. But the failure only produces a DEBUG-level
      log message, so in normal usage you see no error and are simply told
      the task doesn't exist.
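      
      A minimal sketch of what such a task module might look like (the file
      name and the missing package below are hypothetical):
      
          # fail.py (hypothetical) -- a task helper module whose top-level
          # import fails, so the task silently drops out of the registry
          from some_missing_package import helper  # ModuleNotFoundError at load time
          
          
          def doc_to_text(doc):
              return helper.format_prompt(doc)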
      
      Here's an example command line to run the task:
      
          python -m lm_eval --model hf --model_args pretrained=rinna/japanese-gpt-1b --tasks fail
      
      This task is based on a Japanese Winograd task, but that's not
      important; it was just chosen out of familiarity.
      
      * Do not ignore errors when loading tasks
      
      * Change how task errors are logged
      
      This makes the changes proposed in the PR discussion.
      
      1. Exceptions not related to missing modules/imports are logged as
         warnings.
      
      2. Module/import-related exceptions are still logged at debug level,
         but if any of them occur, a single warning is emitted with
         instructions on how to show the full logs (a sketch follows below).
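      
      A minimal Python sketch of that behavior, assuming a loader that
      imports task modules by name (the function and logger names here are
      illustrative, not the harness's actual ones):
      
          import importlib
          import logging
          
          eval_logger = logging.getLogger(__name__)
          
          
          def load_task_modules(module_names):
              tasks, import_failures = {}, []
              for name in module_names:
                  try:
                      tasks[name] = importlib.import_module(name)
                  except ImportError as e:
                      # Missing modules/imports stay at debug level
                      # (ModuleNotFoundError is a subclass of ImportError).
                      eval_logger.debug("Failed to import task %s: %s", name, e)
                      import_failures.append(name)
                  except Exception as e:
                      # Anything else is surfaced as a warning immediately.
                      eval_logger.warning("Error loading task %s: %s", name, e)
              if import_failures:
                  # One summary warning points the user at the debug logs.
                  eval_logger.warning(
                      "Some tasks could not be imported: %s. Enable debug "
                      "logging to see the full tracebacks.",
                      ", ".join(import_failures),
                  )
              return tasks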
      
      * Remove intentionally failing task
      
      ---------
      Co-authored-by: Paul O'Leary McCann <polm@dampfkraft.com>
  13. 28 Dec, 2023 1 commit
  14. 27 Dec, 2023 2 commits
  15. 25 Dec, 2023 1 commit
  16. 24 Dec, 2023 2 commits
  17. 23 Dec, 2023 2 commits
  18. 22 Dec, 2023 2 commits
    • Upstream Mamba Support (`mamba_ssm`) (#1110) · 5503b274
      Hailey Schoelkopf authored
      * modularize HFLM code
      
      * pass through extra kwargs to AutoModel.from_pretrained call
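      
      A sketch of what forwarding extra keyword arguments to the Hugging
      Face loader can look like (simplified; not the harness's actual
      signature):
      
          from transformers import AutoModelForCausalLM
          
          
          def create_model(pretrained, revision="main", dtype="auto", **kwargs):
              # Unrecognized keyword arguments from --model_args are passed
              # straight through to from_pretrained.
              return AutoModelForCausalLM.from_pretrained(
                  pretrained, revision=revision, torch_dtype=dtype, **kwargs
              )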
      
      * remove explicit model_kwargs
      
      * rename gptq -> autogptq
      
      * fix tokenizer pad token errors
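      
      The pad-token errors typically come from tokenizers that define no
      padding token; a common guard (a sketch, not necessarily the exact
      fix used here) is:
      
          from transformers import AutoTokenizer
          
          tokenizer = AutoTokenizer.from_pretrained("gpt2")
          # Fall back to the EOS token when no pad token is defined, so that
          # batched padding does not raise an error.
          if tokenizer.pad_token is None:
              tokenizer.pad_token = tokenizer.eos_token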
      
      * ensure model always respects device_map and autogptq's selected devices
      
      * add a _get_config helper fn
      
      * add mambaLMWrapper
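      
      A rough sketch of what a Mamba wrapper layered on top of the HFLM
      class might look like, assuming the mamba_ssm package's
      MambaLMHeadModel loader (illustrative only, not the wrapper's actual
      code):
      
          import torch
          from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel
          
          from lm_eval.models.huggingface import HFLM
          
          
          class MambaLMWrapper(HFLM):
              def _create_model(self, pretrained, dtype=torch.float16, **kwargs):
                  # mamba_ssm checkpoints are not transformers models, so
                  # they are loaded via MambaLMHeadModel rather than
                  # AutoModelForCausalLM.
                  self._model = MambaLMHeadModel.from_pretrained(
                      pretrained, device=self.device, dtype=dtype
                  )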
      
      * add mamba extra
      
      * add mamba extra
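      
      Once the extra is in place, the optional dependency can be pulled in
      at install time from a source checkout, e.g. (assuming the extra is
      named `mamba`):
      
          pip install -e ".[mamba]"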
      
      * fix conditional import
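      
      The conditional import keeps the harness importable when mamba_ssm is
      not installed; a common pattern (a sketch, not the exact code) is:
      
          try:
              from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel
          
              HAS_MAMBA = True
          except ImportError:
              # mamba_ssm is an optional extra; only complain if a Mamba
              # model is actually requested without it installed.
              HAS_MAMBA = False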
      
      * Fix botched merge commit
      
      * Remove beginning-of-file comment for consistency
      
      * Add docstring for mambaLM re: supported kwargs
      
      * Alphabetize extras
      
      * Update extras table
      
      * appease precommit
      
      * run precommit on mamba_lm