1. 07 Apr, 2024 1 commit
  2. 01 Apr, 2024 1 commit
    • Fix CLI --batch_size arg for openai-completions/local-completions (#1656) · 9516087b
      Michael Goin authored
      The OpenAI interface supports batch size as an argument to the completions API, but specifying it on the CLI, i.e. `lm_eval --model openai-completions --batch_size 16 ...`, fails because of a simple missing str-to-int conversion (a sketch of the needed coercion follows the traceback below).
      
      This is confirmed by my usage and stacktrace from running `OPENAI_API_KEY=dummy lm_eval --model local-completions --tasks gsm8k --batch_size 16 --model_args model=nm-testing/zephyr-beta-7b-gptq-g128,tokenizer_backend=huggingface,base_url=http://localhost:8000/v1`:
      ```
      Traceback (most recent call last):
        File "/home/michael/venv/bin/lm_eval", line 8, in <module>
          sys.exit(cli_evaluate())
        File "/home/michael/code/lm-evaluation-harness/lm_eval/__main__.py", line 341, in cli_evaluate
          results = evaluator.simple_evaluate(
        File "/home/michael/code/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper
          return fn(*args, **kwargs)
        File "/home/michael/code/lm-evaluation-harness/lm_eval/evaluator.py", line 251, in simple_evaluate
          results = evaluate(
        File "/home/michael/code/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper
          return fn(*args, **kwargs)
        File "/home/michael/code/lm-evaluation-harness/lm_eval/evaluator.py", line 390, in evaluate
          resps = getattr(lm, reqtype)(cloned_reqs)
        File "/home/michael/code/lm-evaluation-harness/lm_eval/models/openai_completions.py", line 263, in generate_until
          list(sameuntil_chunks(re_ord.get_reordered(), self.batch_size)),
        File "/home/michael/code/lm-evaluation-harness/lm_eval/models/openai_completions.py", line 251, in sameuntil_chunks
          if len(ret) >= size or x[1] != lastuntil:
      TypeError: '>=' not supported between instances of 'int' and 'str'
      ```
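      A minimal sketch of the kind of str-to-int coercion the fix calls for, assuming the value arrives from the CLI as a string; `coerce_batch_size` is a hypothetical helper name, not the actual patch:
      ```
      # Hypothetical helper (not the actual patch): the CLI passes
      # --batch_size through as a str, so comparisons like
      # `len(ret) >= size` mix int and str and raise TypeError.
      def coerce_batch_size(batch_size):
          # Leave values like "auto" untouched; convert plain digits.
          if isinstance(batch_size, str) and batch_size.isdigit():
              return int(batch_size)
          return batch_size

      assert coerce_batch_size("16") == 16
      assert coerce_batch_size("auto") == "auto"
      ```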
  3. 27 Mar, 2024 1 commit
  4. 26 Mar, 2024 1 commit
    • Integration of NeMo models into LM Evaluation Harness library (#1598) · e9d429e1
      Sergio Perez authored
      * Integration of NeMo models into LM Evaluation Harness library
      
      * rename nemo model as nemo_lm
      
      * move nemo section in readme after hf section
      
      * use self.eot_token_id in get_until()
      
      * improve progress bar showing loglikelihood requests
      
      * data replication or tensor/pipeline replication working fine within one node
      
      * run pre-commit on modified files
      
      * check whether dependencies are installed
      
      * clarify usage of torchrun in README
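
      A hedged example of the launch the torchrun note points at (the `path=` model_args key, checkpoint path, and process count are illustrative assumptions; only the `nemo_lm` model name is confirmed above): `torchrun --nproc_per_node=4 -m lm_eval --model nemo_lm --model_args path=/checkpoints/megatron_model.nemo --tasks gsm8k --batch_size 8`.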
  5. 25 Mar, 2024 2 commits
  6. 21 Mar, 2024 1 commit
  7. 20 Mar, 2024 1 commit
  8. 19 Mar, 2024 2 commits
  9. 18 Mar, 2024 1 commit
  10. 17 Mar, 2024 1 commit
  11. 13 Mar, 2024 1 commit
  12. 09 Mar, 2024 1 commit
  13. 06 Mar, 2024 1 commit
  14. 03 Mar, 2024 1 commit
    • Vllm update DP+TP (#1508) · e5e35fca
      Baber Abbasi authored
      * use `@ray.remote` with distributed vLLM
      
      * update versions
      
      * bugfix
      
      * unpin vllm
      
      * fix pre-commit
      
      * added version assertion error
      
      * Revert "added version assertion error"
      
      This reverts commit 8041e9b78e95eea9f4f4d0dc260115ba8698e9cc.
      
      * added version assertion for DP
      
      * expand DP note
      
      * add warning
      
      * nit
      
      * pin vllm
      
      * fix typos
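
      A hedged sketch of the data-parallel pattern this PR describes: shard requests across model replicas, each evaluated in a Ray task. The worker body and names are illustrative stand-ins (the real worker would build a vLLM engine and reserve GPUs, e.g. via `num_gpus`):
      ```
      import ray

      ray.init()

      @ray.remote
      def run_shard(shard):
          # Stand-in for a replica that builds its own vLLM engine
          # and generates completions for its slice of the requests.
          return [f"generated text for: {prompt}" for prompt in shard]

      prompts = [f"prompt {i}" for i in range(8)]
      replicas = 2
      shards = [prompts[i::replicas] for i in range(replicas)]
      results = ray.get([run_shard.remote(s) for s in shards])

      # Re-interleave per-replica outputs into the original order.
      merged = [None] * len(prompts)
      for i, shard_out in enumerate(results):
          merged[i::replicas] = shard_out
      ```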
  15. 01 Mar, 2024 2 commits
  16. 28 Feb, 2024 1 commit
  17. 27 Feb, 2024 2 commits
    • Fix AttributeError in huggingface.py When 'model_type' is Missing (#1489) · cc771eca
      Rich authored
      
      
      * model_type attribute error
      
      An AttributeError was raised when using a model whose config does not define 'model_type' (sketched at the end of this entry)
      
      * fix w/ and w/out the 'model_type' specification
      
      * use getattr(), also fix other config.model_type reference
      
      * Update huggingface.py
      
      ---------
      Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
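
      A minimal sketch of the getattr() pattern the fix describes, assuming a standard transformers config object; the branch values are illustrative:
      ```
      from transformers import AutoConfig

      config = AutoConfig.from_pretrained("gpt2")
      # Defensive read: returns None instead of raising
      # AttributeError when a config does not define `model_type`.
      model_type = getattr(config, "model_type", None)
      if model_type in ("gpt2", "gpt_neox"):
          pass  # architecture-specific handling goes here
      ```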
    • Refactor `evaluater.evaluate` (#1441) · 5ccd65d4
      Baber Abbasi authored
      
      
      * change `all_gather` to `gather`
      
      * add TaskOutput utility class
      
      * Add FilterResults class and refactor task handling.
      
      * Rename `key` to `filter_key` for clarity
      
      * Add `print_writeout` function in utils.py
      
      * Add function to calculate limit size.
      
      * Add doc_iterator method to Task class
      
      * Refactor `doc_iterator` and cleanup in Task class
      
      * remove superfluous bits
      
      * change `all_gather` to `gather`
      
      * bugfix
      
      * bugfix
      
      * fix `gather`
      
      * Refactor `gather` loop
      
      * Refactor aggregate metrics calculation
      
      * Refactor and simplify aggregate metrics calculation
      Removed unused code
      
      * Simplify metrics calculation and remove unused code.
      
      * simplify the metrics calculation in `utils.py` and `evaluator.py`.
      
      * Fix group metric
      
      * change evaluate to hf_evaluate
      
      * change evaluate to hf_evaluate
      
      * add docs
      
      * add docs
      
      * nits
      
      * make isslice keyword only
      
      * nit
      
      * add todo
      
      * nit
      
      * nit
      
      * nit: swap order samples_metrics tuple
      
      * move instance sorting outside loop
      
      * nit
      
      * nit
      
      * Add __repr__ for ConfigurableTask
      
      * nit
      
      * nit
      
      * Revert "nit"
      
      This reverts commit dab8d9977a643752a17f840fd8cf7e4b107df28f.
      
      * fix some logging
      
      * nit
      
      * fix `predict_only` bug. thanks to `@LSinev`!
      
      * change `print_tasks` to `prepare_print_tasks`
      
      * nits
      
      * move eval utils
      
      * move eval utils
      
      * nit
      
      * add comment
      
      * added tqdm descriptions
      
      * Update lm_eval/evaluator_utils.py
      Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
      
      * fix mgsm bug
      
      * nit
      
      * fix `build_all_requests`
      
      * pre-commit
      
      * add ceil to limit
      
      ---------
      Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
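
      A hedged sketch of the `all_gather` -> `gather` change above: since only rank 0 aggregates metrics, per-rank results are collected on rank 0 alone rather than broadcast to every rank. The function is illustrative and assumes `torch.distributed` is already initialized:
      ```
      import torch
      import torch.distributed as dist

      def collect_for_rank0(local_metrics: torch.Tensor):
          world_size = dist.get_world_size()
          if dist.get_rank() == 0:
              buckets = [torch.empty_like(local_metrics) for _ in range(world_size)]
              dist.gather(local_metrics, gather_list=buckets, dst=0)
              return torch.cat(buckets)
          dist.gather(local_metrics, dst=0)  # non-dst ranks only send
          return None
      ```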
  18. 26 Feb, 2024 4 commits
  19. 22 Feb, 2024 2 commits
  20. 20 Feb, 2024 1 commit
  21. 18 Feb, 2024 1 commit
  22. 14 Feb, 2024 1 commit
  23. 10 Feb, 2024 2 commits
  24. 07 Feb, 2024 1 commit
  25. 06 Feb, 2024 1 commit
  26. 05 Feb, 2024 1 commit
  27. 01 Feb, 2024 1 commit
  28. 31 Jan, 2024 3 commits
    • add bypass metric (#1156) · f8203de1
      Baber Abbasi authored
      * add bypass metric
      
      * fixed `bypass` metric.
      
      * add task attributes if predict_only
      
      * add `predict_only` checks
      
      * add docs
      
      * added `overide_metric`, `override_config` to `Task`
      
      * nits
      
      * nit
      
      * changed --predict_only to generations; nits
      
      * nits
      
      * nits
      
      * change gen_kwargs warning
      
      * add note about `--predict_only` in README.md
      
      * added `predict_only`
      
      * move table to bottom
      
      * nit
      
      * change null aggregation to bypass (conflict)
      
      * bugfix; default `temp=0.0`
      
      * typo
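
      A hedged sketch of what a "bypass" metric amounts to: under `--predict_only` the harness saves generations without scoring them, so the metric and its aggregation are placeholders. The names and sentinel value are illustrative assumptions:
      ```
      def bypass(items):
          # Pass predictions through unscored.
          return items

      def bypass_agg(items):
          # Sentinel aggregate; signals "no real score was computed".
          return 999
      ```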
    • Add support for RWKV models with World tokenizer (#1374) · 084b7050
      Eugene Cheah authored
      
      
      * Add support for RWKV models with World tokenizer
      
      The RWKV line of models with the World tokenizer does not allow the padding token to be configured; its value is preset to 0.
      
      This, however, fails all the "if set" checks and would cause the tokenizer to crash.
      
      A tokenizer class-name check was added, in addition to the model-type check, as there exist RWKV models which use the NeoX tokenizers (sketched at the end of this entry)
      
      * Update huggingface.py
      
      Genericized so that this supports any RWKVWorld tokenizer, and added a fall-back in case the HF implementation name changes.
      
      * Comply with formatting guidelines
      
      * fix format
      
      ---------
      Co-authored-by: Stella Biderman <stellabiderman@gmail.com>
      Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
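
      A hedged sketch of the detection described above: match on the tokenizer class name rather than `model_type` alone, since some RWKV checkpoints ship NeoX tokenizers. The exact string is an illustrative assumption:
      ```
      def is_world_tokenizer(tokenizer) -> bool:
          # Class-name fall-back in case the HF implementation's
          # module path changes; "RWKVWorld" is illustrative.
          return "RWKVWorld" in type(tokenizer).__name__

      # For World tokenizers the pad token id is hard-wired to 0, so
      # the usual "if pad_token is set" checks must be skipped.
      ```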
    • Fix unintuitive `--gen_kwargs` behavior (#1329) · bd7d265a
      Hailey Schoelkopf authored
      * don't override do_sample if no value for it is passed
      
      * Update gen_kwargs override condition
      
      * Update huggingface.py
      
      * Update huggingface.py
      
      * run linters
      
      * silence an erroneous warning
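
      A hedged sketch of the override condition the fix describes: only set `do_sample` when the user's gen_kwargs actually imply a value for it, instead of always clobbering the model default. The key handling is an illustrative assumption:
      ```
      def resolve_do_sample(gen_kwargs: dict) -> dict:
          out = dict(gen_kwargs)
          if "do_sample" not in out and out.get("temperature") == 0.0:
              out["do_sample"] = False  # temperature 0 implies greedy
          return out
      ```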
  29. 28 Jan, 2024 1 commit