"device_operation/include/device_batched_gemm_xdl.hpp" did not exist on "6f928a08765e2110caf8ef20586c29d5e414ff71"
  1. 01 Mar, 2024 1 commit
  2. 28 Feb, 2024 1 commit
  3. 27 Feb, 2024 4 commits
    • Fix AttributeError in huggingface.py When 'model_type' is Missing (#1489) · cc771eca
      Rich authored
      
      
      * model_type attribute error
      
      Getting attribute error when using a model without a 'model_type'
      
      * fix w/ and w/out the 'model_type' specification
      
      * use getattr(), also fix other config.model_type reference
      
      * Update huggingface.py
      
      ---------
      Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
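      A minimal sketch of the getattr() pattern described in the commit above; the helper name and fallback handling here are illustrative, not the exact huggingface.py code:

      ```python
      from transformers import AutoConfig

      def lookup_model_type(pretrained: str):
          # Some configs (e.g. certain remote-code models) do not define
          # `model_type`; getattr() with a default avoids the AttributeError
          # that a bare `config.model_type` access would raise.
          config = AutoConfig.from_pretrained(pretrained, trust_remote_code=True)
          return getattr(config, "model_type", None)

      model_type = lookup_model_type("gpt2")
      if model_type is None:
          # take a conservative default code path when the field is absent
          model_type = ""
      ```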
    • Hailey Schoelkopf
    • add multilingual mmlu eval (#1484) · 7cd004c4
      Zehan Li authored
    • Refactor `evaluater.evaluate` (#1441) · 5ccd65d4
      Baber Abbasi authored
      
      
      * change `all_gather` to `gather`
      
      * add TaskOutput utility class
      
      * Add FilterResults class and refactor task handling.
      
      * Rename `key` to `filter_key` for clarity
      
      * Add `print_writeout` function in utils.py
      
      * Add function to calculate limit size.
      
      * Add doc_iterator method to Task class
      
      * Refactor `doc_iterator` and cleanup in Task class
      
      * remove superfluous bits
      
      * change `all_gather` to `gather`
      
      * bugfix
      
      * bugfix
      
      * fix `gather`
      
      * Refactor `gather` loop
      
      * Refactor aggregate metrics calculation
      
      * Refactor and simplify aggregate metrics calculation
      Removed unused code
      
      * Simplify metrics calculation and remove unused code.
      
      * simplify the metrics calculation in `utils.py` and `evaluator.py`.
      
      * Fix group metric
      
      * change evaluate to hf_evaluate
      
      * change evaluate to hf_evaluate
      
      * add docs
      
      * add docs
      
      * nits
      
      * make isslice keyword only
      
      * nit
      
      * add todo
      
      * nit
      
      * nit
      
      * nit: swap order samples_metrics tuple
      
      * move instance sorting outside loop
      
      * nit
      
      * nit
      
      * Add __repr__ for ConfigurableTask
      
      * nit
      
      * nit
      
      * Revert "nit"
      
      This reverts commit dab8d9977a643752a17f840fd8cf7e4b107df28f.
      
      * fix some logging
      
      * nit
      
      * fix `predict_only` bug. thanks to `@LSinev`!
      
      * change `print_tasks` to `prepare_print_tasks`
      
      * nits
      
      * move eval utils
      
      * move eval utils
      
      * nit
      
      * add comment
      
      * added tqdm descriptions
      
      * Update lm_eval/evaluator_utils.py
      Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
      
      * fix mgsm bug
      
      * nit
      
      * fix `build_all_requests`
      
      * pre-commit
      
      * add ceil to limit
      
      ---------
      Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
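      A rough sketch of the doc-iteration and limit-size pieces mentioned in the commit above (a keyword-only islice over the docs, with ceil applied to a fractional limit); the helper names and signatures are assumptions, not the exact evaluator code:

      ```python
      import itertools
      import math
      from typing import Dict, Iterable, Iterator, Optional, Tuple

      def calculate_limit(limit: Optional[float], num_docs: int) -> Optional[int]:
          # Hypothetical "calculate limit size" helper: a fractional --limit is
          # turned into a document count, rounded up ("add ceil to limit").
          if limit is None:
              return None
          return int(math.ceil(num_docs * limit)) if limit < 1.0 else int(limit)

      def doc_iterator(
          docs: Iterable[Dict],
          *,
          rank: int = 0,
          limit: Optional[int] = None,
          world_size: int = 1,
      ) -> Iterator[Tuple[int, Dict]]:
          # Keyword-only slice over the (optionally limited) docs, striding by
          # world_size so each rank sees a disjoint shard of the dataset.
          yield from itertools.islice(enumerate(docs), rank, limit, world_size)

      # usage: keep ~10% of 100 docs and shard them across 4 ranks
      docs = [{"idx": i} for i in range(100)]
      n = calculate_limit(0.1, len(docs))                           # -> 10
      shard = list(doc_iterator(docs, rank=0, limit=n, world_size=4))
      print(len(shard))                                             # rank 0 sees docs 0, 4, 8
      ```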
  4. 26 Feb, 2024 7 commits
  5. 24 Feb, 2024 1 commit
    • Add environment and transformers version logging in results dump (#1464) · f78e2da4
      LSinev authored
      * Save git_hash to results even if git is not available to call as subprocess
      
      * Store more info about environment and transformers version in results to help researchers track inconsistencies
      
      * moved added logging to logging_utils
      
      * moved get_git_commit_hash to logging_utils.py
      
      * moved add_env_info inside evaluator
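      A minimal sketch of the two helpers named in the commit message, get_git_commit_hash and add_env_info; beyond the git hash and transformers version mentioned above, the exact fields recorded are assumptions:

      ```python
      import subprocess
      import sys
      from typing import Optional

      def get_git_commit_hash() -> Optional[str]:
          # Return the current commit hash, or None when git is unavailable or
          # the code is not running from a git checkout, instead of raising.
          try:
              return (
                  subprocess.check_output(
                      ["git", "rev-parse", "HEAD"], stderr=subprocess.DEVNULL
                  )
                  .decode("ascii")
                  .strip()
              )
          except (subprocess.CalledProcessError, OSError):
              return None

      def add_env_info(results: dict) -> None:
          # Attach environment details to the results dict before it is dumped.
          import transformers

          results["git_hash"] = get_git_commit_hash()
          results["transformers_version"] = transformers.__version__
          results["python_version"] = sys.version

      results = {}
      add_env_info(results)
      print(results)
      ```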
  6. 23 Feb, 2024 2 commits
  7. 22 Feb, 2024 5 commits
  8. 21 Feb, 2024 1 commit
    • Added KMMLU evaluation method and changed ReadMe (#1447) · c26a6ac7
      Hanwool Albert Lee authored
      
      
      * update kmmlu default formatting
      
      * Update _default_kmmlu_yaml
      
      * Delete lm_eval/tasks/kmmlu/utils.py
      
      * new tasks implemented
      
      * add direct tasks
      
      * update direct evaluate
      
      * update direct eval
      
      * add cot sample
      
      * update cot
      
      * add cot
      
      * Update _cot_kmmlu_yaml
      
      * add kmmlu90
      
      * Update and rename _cot_kmmlu.yaml to _cot_kmmlu_yaml
      
      * Create kmmlu90.yaml
      
      * Update _cot_kmmlu_yaml
      
      * add direct
      
      * Update _cot_kmmlu_yaml
      
      * Update and rename kmmlu90.yaml to kmmlu90_cot.yaml
      
      * Update kmmlu90_direct.yaml
      
      * add kmmlu hard
      
      * Update _cot_kmmlu_yaml
      
      * Update _cot_kmmlu_yaml
      
      * update cot
      
      * update cot
      
      * erase typo
      
      * Update _cot_kmmlu_yaml
      
      * update cot
      
      * Rename dataset to match k-mmlu-hard
      
      * removed kmmlu90
      
      * fixed name 'kmmlu_cot' to 'kmmlu_hard_cot' and revised README
      
      * applied pre-commit before pull requests
      
      * rename datasets and add notes
      
      * Remove DS_Store cache
      
      * Update lm_eval/tasks/kmmlu/README.md
      Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
      
      * Change citations and reflect reviews on version
      
      * Added kmmlu_hard and fixed other errors
      
      * fixing minor errors
      
      * remove duplicated
      
      * Rename files
      
      * try ".index"
      
      * minor fix
      
      * minor fix again
      
      * fix revert.
      
      * minor fix. thank for hailey
      
      ---------
      Co-authored-by: GUIJIN SON <spthsrbwls123@yonsei.ac.kr>
      Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
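      For reference, a hedged sketch of running the newly added tasks through lm_eval's Python API; the task names ("kmmlu", "kmmlu_hard_cot") follow the commit message and may differ from the finally registered names, and the model and limit arguments are only illustrative:

      ```python
      import lm_eval

      # Small smoke test of the new KMMLU variants on a tiny HF model.
      results = lm_eval.simple_evaluate(
          model="hf",
          model_args="pretrained=EleutherAI/pythia-160m",
          tasks=["kmmlu", "kmmlu_hard_cot"],
          num_fewshot=0,
          limit=10,
      )
      print(results["results"])
      ```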
  9. 20 Feb, 2024 3 commits
  10. 19 Feb, 2024 2 commits
  11. 18 Feb, 2024 1 commit
  12. 15 Feb, 2024 1 commit
  13. 14 Feb, 2024 1 commit
  14. 13 Feb, 2024 1 commit
  15. 12 Feb, 2024 2 commits
  16. 11 Feb, 2024 3 commits
    • Add multilingual TruthfulQA task (#1420) · 7397b965
      Uanu authored
    • Add multilingual ARC task (#1419) · 0256c682
      Uanu authored
    • Evaluate (#1385) · 1ff84897
      Baber Abbasi authored
      * un-exclude `evaluate.py` from linting
      
      * readability
      
      * readability
      
      * add task name to build info message
      
      * fix link
      
      * nit
      
      * add functions for var and mean pooling
      
      * add functions for var and mean pooling
      
      * metadata compatibility with task
      
      * rename `override_config` to `set_config` and move to `Task`
      
      * add unit test
      
      * nit
      
      * nit
      
      * bugfix
      
      * nit
      
      * nit
      
      * nit
      
      * add docstrings
      
      * fix metadata-fewshot
      
      * revert metric refactor
      
      * nit
      
      * type checking
      
      * type hints
      
      * type hints
      
      * move `override_metric` to `Task`
      
      * change metadata
      
      * change name
      
      * pre-commit
      
      * rename
      
      * remove
      
      * remove
      
      * `override_metric` backwards compatible with `Task`
      
      * type hints
      
      * use generic
      
      * type hint
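      The "add functions for var and mean pooling" bullets above suggest combining per-subtask statistics; a minimal sketch of one plausible reading (size-weighted mean, degrees-of-freedom-weighted pooled variance), with names and weighting that may differ from what was actually added:

      ```python
      from typing import List

      def pooled_mean(means: List[float], sizes: List[int]) -> float:
          # Sample-size-weighted mean of per-subtask means.
          return sum(m * n for m, n in zip(means, sizes)) / sum(sizes)

      def pooled_variance(variances: List[float], sizes: List[int]) -> float:
          # Pooled (within-group) variance, weighting each subtask's variance
          # by its degrees of freedom (n_i - 1).
          return sum(v * (n - 1) for v, n in zip(variances, sizes)) / sum(
              n - 1 for n in sizes
          )

      # e.g. two subtasks with accuracies 0.60 (n=100) and 0.70 (n=300)
      print(pooled_mean([0.60, 0.70], [100, 300]))       # 0.675
      print(pooled_variance([0.24, 0.21], [100, 300]))   # ~0.217
      ```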
  17. 10 Feb, 2024 2 commits
  18. 09 Feb, 2024 1 commit
  19. 07 Feb, 2024 1 commit