1. 04 Sep, 2024 1 commit
  2. 01 Aug, 2024 1 commit
  3. 29 Jul, 2024 1 commit
• bugfix and docs for API (#2139) · b70af4f5
  Baber Abbasi authored
      
      
      * encoding bugfix
      
      * encoding bugfix
      
* overload loglikelihood rather than loglikelihood_tokens
      
      * add custom tokenizer
      
      * add docs
      
      * Update API_guide.md
      
      fix link; add note
      
      * Update API_guide.md
      
      typo
      
      * pre-commit
      
      * add link in readme
      
      * nit
      
      * nit
      
      * nit
      
      * Update API_guide.md
      
      nits
      
      * Update API_guide.md
      
      * Update API_guide.md
      
      * Update API_guide.md
      
      * Update API_guide.md
      
      * Update README.md
      
      * Update docs/API_guide.md
      
      * Update docs/API_guide.md
      
      * Update API_guide.md
      
      ---------
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
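A minimal sketch of the "overload loglikelihood rather than loglikelihood_tokens" bullet above, assuming the point is the override boundary for API backends that score text rather than arbitrary token IDs; the class and method names here are illustrative, not the harness's actual interfaces:

```python
# Illustrative only: BaseLM/TextOnlyAPIModel and _api_score are hypothetical names.
# A backend that cannot score arbitrary token IDs overrides the string-level
# loglikelihood entry point instead of the token-level loglikelihood_tokens.
from typing import List, Tuple


class BaseLM:
    def loglikelihood(self, requests: List[Tuple[str, str]]) -> List[float]:
        # default path: tokenize, then defer to loglikelihood_tokens
        raise NotImplementedError

    def loglikelihood_tokens(self, token_requests) -> List[float]:
        raise NotImplementedError


class TextOnlyAPIModel(BaseLM):
    def loglikelihood(self, requests: List[Tuple[str, str]]) -> List[float]:
        # score (context, continuation) strings directly via the remote API
        return [self._api_score(ctx, cont) for ctx, cont in requests]

    def _api_score(self, context: str, continuation: str) -> float:
        # placeholder for an HTTP call that returns a summed logprob
        return 0.0
```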
  4. 22 Jul, 2024 1 commit
• Refactor API models (#2008) · 42dc2448
  Baber Abbasi authored
      
      
      * refactor pad_token handling to fn
      
      * fix docs
      
      * add pad_token_handling to vllm
      
      * start on API superclass
      
      * don't detokenize the returned logits
      
      * streamline vllm tokenizer
      
      * add type hint
      
      * pre-commit
      
      * seems to be in working order
      
      * add model to init
      
      * refactor api models
      
      * nit
      
      * cleanup
      
      * add pbar
      
      * fix type hints
      
      * change optional dependencies
      
      * json encode chat template
      
      * add type hints
      
* deal with different prompt input requirements
      
      * nits
      
      * fix
      
      * cache inside async
      
      * fix
      
      * fix
      
      * nits
      
      * nits
      
      * nits
      
      * nit
      
      * fixup
      
      * fixup
      
      * nit
      
      * add dummy retry
      
      * add dummy retry
      
      * handle imports; skip failing test
      
      * add type hint
      
      * add tests
      
      * add dependency to tests
      
      * add package names to exception
      
      * nit
      
      * docs; type hints
      
      * handle api key
      
      * nit
      
      * tokenizer bug
      
      * fix tokenizer
      
      * nit
      
      * nit
      
      * add better error messages
      
      * nit
      
      * remove decorator
      
      * CI: install api dep
      
      * revert evaluator.py
      
      * consolidate
      
      * consolidate
      
      * nits
      
      * nit
      
      * fix typealias
      
      * nit
      
      * nit
      
      * nit
      
      * Update lm_eval/models/api_models.py
      
      typo
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
      
      * Update lm_eval/models/openai_completions.py
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
      
      * Update lm_eval/models/anthropic_llms.py
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
      
      * Update lm_eval/models/api_models.py
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
      
      * fix typo
      
      * add news section
      
      * add info for API
      
      * pre-commit
      
      * typo
      
* fix bug: unpack loglikelihood requests
      
      * fix bug: shared gen_kwargs mutated
      
      * nit: handle copy properly
      
      * Update README.md
      
      * Update README.md
      
      * Update README.md
      
      * Update api_models.py
      
      * Update README.md
      
      ---------
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
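A hedged sketch of the "API superclass" idea summarized in the bullets above: shared request/response plumbing lives in one template class, and concrete backends only describe provider-specific payloads and parsing. Names are illustrative, not the exact interfaces in lm_eval/models/api_models.py:

```python
# Illustrative sketch only: TemplateAPIModel and OpenAIStyleCompletions are
# hypothetical names standing in for the refactor's "API superclass" pattern.
import abc


class TemplateAPIModel(abc.ABC):
    """Shared logic (batching, retries, caching) would live here."""

    def __init__(self, base_url: str, batch_size: int = 1):
        self.base_url = base_url
        self.batch_size = int(batch_size)  # CLI args may arrive as strings

    @abc.abstractmethod
    def _create_payload(self, messages, gen_kwargs: dict) -> dict:
        """Build the provider-specific request body."""

    @abc.abstractmethod
    def parse_generations(self, response: dict) -> str:
        """Extract generated text from the provider-specific response."""


class OpenAIStyleCompletions(TemplateAPIModel):
    def _create_payload(self, messages, gen_kwargs):
        return {"prompt": messages, **gen_kwargs}

    def parse_generations(self, response):
        return response["choices"][0]["text"]
```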
  5. 05 May, 2024 2 commits
  6. 01 Apr, 2024 1 commit
• Fix CLI --batch_size arg for openai-completions/local-completions (#1656) · 9516087b
  Michael Goin authored
The OpenAI interface supports batch size as an argument to the completions API, but specifying it on the CLI, i.e. `lm_eval --model openai-completions --batch_size 16 ...`, does not work because of a simple missing str->int conversion.

This is confirmed by my usage and the stack trace from running `OPENAI_API_KEY=dummy lm_eval --model local-completions --tasks gsm8k --batch_size 16 --model_args model=nm-testing/zephyr-beta-7b-gptq-g128,tokenizer_backend=huggingface,base_url=http://localhost:8000/v1`:
      ```
      Traceback (most recent call last):
        File "/home/michael/venv/bin/lm_eval", line 8, in <module>
          sys.exit(cli_evaluate())
        File "/home/michael/code/lm-evaluation-harness/lm_eval/__main__.py", line 341, in cli_evaluate
          results = evaluator.simple_evaluate(
        File "/home/michael/code/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper
          return fn(*args, **kwargs)
        File "/home/michael/code/lm-evaluation-harness/lm_eval/evaluator.py", line 251, in simple_evaluate
          results = evaluate(
        File "/home/michael/code/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper
          return fn(*args, **kwargs)
        File "/home/michael/code/lm-evaluation-harness/lm_eval/evaluator.py", line 390, in evaluate
          resps = getattr(lm, reqtype)(cloned_reqs)
        File "/home/michael/code/lm-evaluation-harness/lm_eval/models/openai_completions.py", line 263, in generate_until
          list(sameuntil_chunks(re_ord.get_reordered(), self.batch_size)),
        File "/home/michael/code/lm-evaluation-harness/lm_eval/models/openai_completions.py", line 251, in sameuntil_chunks
          if len(ret) >= size or x[1] != lastuntil:
      TypeError: '>=' not supported between instances of 'int' and 'str'
      ```
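The fix is the missing str->int conversion described above; a minimal sketch of the idea (the helper name and its exact placement in the harness are assumptions):

```python
# Minimal sketch: CLI arguments arrive as strings, so coerce batch_size to int
# before it is compared against chunk lengths. Helper name is hypothetical.
def coerce_batch_size(batch_size) -> int:
    if isinstance(batch_size, str):
        return int(batch_size)
    return batch_size


assert coerce_batch_size("16") == 16  # previously len(ret) >= "16" raised TypeError
```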
  7. 21 Mar, 2024 1 commit
  8. 13 Mar, 2024 1 commit
  9. 06 Mar, 2024 1 commit
  10. 22 Feb, 2024 2 commits
  11. 14 Feb, 2024 1 commit
  12. 22 Jan, 2024 1 commit
  13. 02 Jan, 2024 1 commit
  14. 27 Dec, 2023 1 commit
• nits + fix siqa (#1216) · 6a1c19ed
  Baber Abbasi authored
      * fix group
      
      * siqa: default.yml -> default.yaml
      
      * max_gen_toks -> self.max_gen_toks
      
      * add ids to task tests
      
      * fix siqa
      
      * fix gen_kwargs for openai-chat
  15. 22 Dec, 2023 1 commit
• Generic decorator for handling rate limit errors (#1109) · 046ea6e2
  Zach Schillaci authored
      
      
      * Add retry error handler
      
      * fixup! Add retry error handler
      
      * Move to utils.py
      
      * Run isort on utils.py
      
      * Catch multiple exceptions
      
      * Update LMs with exception handler
      
      * Fixes to anthropic retry handler
      
      * fix callback kwarg
      
      * Update textsynth.py
      
      * fix python 3.8 incompatibility
      
* fix indentation error I introduced
      
      * placate linter?
      
      * Update on_exception_callback kwarg name
      
      * fixup! Merge branch 'main' into add-retry-error-handler
      
      * fixup! fixup! Merge branch 'main' into add-retry-error-handler
      
      * Merge conflicts are fun
      
      * Run pre-commit
      
      ---------
Co-authored-by: Hailey Schoelkopf <65563625+haileyschoelkopf@users.noreply.github.com>
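A hedged sketch of what a generic retry decorator for rate-limit errors can look like; the actual helper added to lm_eval/utils.py may differ in name, signature, and backoff policy:

```python
# Illustrative only: retries a callable when any of the given exception types
# (e.g. provider rate-limit errors) is raised, invoking an optional callback
# before each retry. Not the exact lm_eval/utils.py implementation.
import functools
import time


def retry_on_specific_exceptions(on_exceptions, max_retries=3, backoff_time=1.0,
                                 on_exception_callback=None):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            attempt = 0
            while True:
                try:
                    return fn(*args, **kwargs)
                except tuple(on_exceptions) as exc:
                    attempt += 1
                    if on_exception_callback is not None:
                        on_exception_callback(exc, backoff_time)
                    if attempt >= max_retries:
                        raise
                    time.sleep(backoff_time * attempt)  # simple linear backoff
        return wrapper
    return decorator
```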
  16. 21 Dec, 2023 2 commits
  17. 20 Dec, 2023 2 commits
• Implementing local OpenAI API-style chat completions on any given inference server (#1174) · fcfc0c60
  Vicki Boykis authored
      * LocalChatCompletionsLM add
      
      * clean up completions class
      
      * clean up completions class
      
      * update tokens
      
      * README
      
      * fix constructor
      
      * eos token
      
      * folding local-chat-completions into OpenAIChatCompletions
      
      * refactoring to include gen_kwargs as passable option
      
      * add todo on chat completion kwarg validation
      
      * Ruff and README fix
      
      * generalize to **kwargs
      
      * remove unnecessary kwargs
      
      * README and remove kwargs
      
      * README
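For context, the request shape such a local OpenAI API-style server expects; the endpoint path and payload follow the public OpenAI chat completions format, while the base_url and model name below are placeholders, and the harness's local-chat-completions model type builds requests of this shape on the caller's behalf:

```python
# Illustrative request to a local OpenAI API-style /v1/chat/completions endpoint.
# Assumes an inference server is already running at the placeholder base_url.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # placeholder base_url
    json={
        "model": "my-local-model",  # placeholder model name
        "messages": [{"role": "user", "content": "What is 2 + 2?"}],
        "max_tokens": 16,
        "temperature": 0.0,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```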
• Switch Linting to `ruff` (#1166) · 65b8761d
  Baber Abbasi authored
      * add ruff and isort. remove black and flake8
      
      * remove unnecessary dependencies
      
      * remove dependency from table
      
      * change order
      
      * ran ruff
      
      * check 3.9
      
      * exclude evaluator
      
      * update CI workflow
      
      * use ruff config in pyproject.toml
      
      * test
      
      * add isort rules to ruff
      
      * sort imports
      
      * import `make_table`
      
      * try stages for no-commit-to-branch
      
      * turn on mypy for pre-commit
      
      * test
      
      * test
      
      * test
      
      * change no-commit-to-branch to default
      
      * nits
      
      * fixed dependency
  18. 16 Dec, 2023 1 commit
• openai nits (#1139) · 8f5b2295
  Baber Abbasi authored
      * fixed syntactic nits
      
      * fix temperature and seed
      
      * fix logprobs
      
      * fixup merge
  19. 15 Dec, 2023 1 commit
  20. 28 Nov, 2023 1 commit
  21. 27 Nov, 2023 3 commits
  22. 24 Nov, 2023 1 commit
  23. 21 Nov, 2023 3 commits
  24. 17 Oct, 2023 1 commit
  25. 09 Sep, 2023 1 commit
  26. 07 Sep, 2023 1 commit
  27. 25 Aug, 2023 1 commit
  28. 07 Aug, 2023 2 commits
  29. 06 Aug, 2023 1 commit
  30. 28 Jun, 2023 1 commit
  31. 19 Jun, 2023 1 commit