    Fix CLI --batch_size arg for openai-completions/local-completions (#1656) · 9516087b
    Michael Goin authored
The OpenAI interface supports batch size as an argument to the completions API, but specifying it on the CLI, e.g. `lm_eval --model openai-completions --batch_size 16 ...`, fails because of a simple missing str->int conversion.
    
    This is confirmed by my usage and stacktrace from running `OPENAI_API_KEY=dummy lm_eval --model local-completions --tasks gsm8k --batch_size 16 --model_args model=nm-testing/zephyr-beta-7b-gptq-g128,tokenizer_backend=huggingface,base_url=http://localhost:8000/v1`:
    ```
    Traceback (most recent call last):
      File "/home/michael/venv/bin/lm_eval", line 8, in <module>
        sys.exit(cli_evaluate())
      File "/home/michael/code/lm-evaluation-harness/lm_eval/__main__.py", line 341, in cli_evaluate
        results = evaluator.simple_evaluate(
      File "/home/michael/code/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper
        return fn(*args, **kwargs)
      File "/home/michael/code/lm-evaluation-harness/lm_eval/evaluator.py", line 251, in simple_evaluate
        results = evaluate(
      File "/home/michael/code/lm-evaluation-harness/lm_eval/utils.py", line 288, in _wrapper
        return fn(*args, **kwargs)
      File "/home/michael/code/lm-evaluation-harness/lm_eval/evaluator.py", line 390, in evaluate
        resps = getattr(lm, reqtype)(cloned_reqs)
      File "/home/michael/code/lm-evaluation-harness/lm_eval/models/openai_completions.py", line 263, in generate_until
        list(sameuntil_chunks(re_ord.get_reordered(), self.batch_size)),
      File "/home/michael/code/lm-evaluation-harness/lm_eval/models/openai_completions.py", line 251, in sameuntil_chunks
        if len(ret) >= size or x[1] != lastuntil:
    TypeError: '>=' not supported between instances of 'int' and 'str'
    ```
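    The failure mode and fix can be sketched as follows. This is an illustrative reconstruction, not the actual lm-evaluation-harness code: `sameuntil_chunks` is simplified from the traceback's call site, and the request tuples are made up.

    ```python
    # Sketch of the bug: argparse hands --batch_size to the model as a string,
    # and the chunking helper compares it against an int without conversion.

    def sameuntil_chunks(requests, size):
        """Group consecutive requests sharing the same stop sequence,
        yielding batches of at most `size` items."""
        ret = []
        lastuntil = requests[0][1]
        for req in requests:
            # If `size` is the raw CLI string "16", this comparison raises
            # TypeError: '>=' not supported between instances of 'int' and 'str'.
            if len(ret) >= size or req[1] != lastuntil:
                yield ret, lastuntil
                ret, lastuntil = [], req[1]
            ret.append(req)
        if ret:
            yield ret, lastuntil

    # The fix: coerce the CLI value to int once, when the model is set up.
    batch_size_arg = "16"            # value as it arrives from the CLI
    batch_size = int(batch_size_arg)

    # Hypothetical (prompt, stop-sequence) request pairs for demonstration.
    reqs = [("prompt a", "\n"), ("prompt b", "\n"), ("prompt c", "###")]
    chunks = list(sameuntil_chunks(reqs, batch_size))
    # → two chunks: the first two requests share "\n", the third uses "###"
    ```
    
    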