gaoqiong / lm-evaluation-harness · Commits

Commit 471297ba, authored Jul 31, 2023 by baberabb
parent b8510001

fixed generation_kwargs; added dependency groups to testing on CI
Showing 2 changed files with 11 additions and 8 deletions (+11 -8):

- .github/workflows/unit_tests.yml (+1 -1)
- lm_eval/models/anthropic_llms.py (+10 -7)
.github/workflows/unit_tests.yml (+1 -1)

```diff
@@ -55,7 +55,7 @@ jobs:
       - name: Install dependencies
         run: |
           python -m pip install --upgrade pip
-          pip install -e '.[testing]' --extra-index-url https://download.pytorch.org/whl/cpu
+          pip install -e '.[testing,anthropic,sentencepiece]' --extra-index-url https://download.pytorch.org/whl/cpu
           # Install optional git dependencies
           # pip install bleurt@https://github.com/google-research/bleurt/archive/b610120347ef22b494b6d69b4316e303f5932516.zip#egg=bleurt
           # if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
```
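The CI change installs the package with extra dependency groups (`pip install -e '.[testing,anthropic,sentencepiece]'`), so the optional Anthropic backend is importable during tests. Code behind such extras is commonly imported defensively; below is a minimal, generic sketch of that guarded-import pattern (the helper name and the stand-in module names are hypothetical, not part of the harness):

```python
import importlib


def optional_import(name: str):
    """Return the named module if it is installed, else None.

    Mirrors how an optional dependency group (e.g. an 'anthropic' extra)
    is typically handled: the import is attempted lazily, and a missing
    package degrades gracefully instead of crashing at import time.
    """
    try:
        return importlib.import_module(name)
    except ModuleNotFoundError:
        return None


# A stdlib module is always importable:
json_mod = optional_import("json")
# A nonexistent package stands in for an extra that was never installed:
missing = optional_import("no_such_extra_pkg_12345")
```

Callers can then raise a targeted error ("install with `pip install -e '.[anthropic]'`") only when the feature is actually used.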
lm_eval/models/anthropic_llms.py (+10 -7)

```diff
@@ -49,12 +49,12 @@ class AnthropicLM(LM):
     def __init__(
         self,
-        batch_size=None,
+        batch_size: int = 1,
         model: str = "claude-2.0",
         max_tokens_to_sample: int = 256,
-        temperature: float = 1.0,  # defaults to 1
-        **kwargs: Any,  # top_p, top_k, etc.
-    ):
-        # TODO: remove batch_size
+        temperature: float = 0,  # defaults to 1
+        **kwargs,  # top_p, top_k, etc.
+    ):
         """Anthropic API wrapper.

         :param model: str
```
@@ -119,13 +119,16 @@ class AnthropicLM(LM):
try
:
inp
=
request
[
0
]
request_args
=
request
[
1
]
until
=
request_args
[
"until"
]
# generation_kwargs
until
=
request_args
.
get
(
"until"
)
max_gen_toks
=
request_args
.
get
(
"max_gen_toks"
,
self
.
max_length
)
temperature
=
request_args
.
get
(
"temperature"
,
self
.
temperature
)
response
=
anthropic_completion
(
client
=
self
.
client
,
model
=
self
.
model
,
prompt
=
inp
,
max_tokens_to_sample
=
self
.
max_
tok
en
s
_to
_sample
,
temperature
=
self
.
temperature
,
# TODO: implement non-greedy sampling for Anthropic
max_tokens_to_sample
=
max_
g
en_to
ks
,
temperature
=
temperature
,
# TODO: implement non-greedy sampling for Anthropic
stop
=
until
,
**
self
.
kwargs
,
)
...
...
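The generation_kwargs fix replaces direct indexing (`request_args["until"]`, which raises `KeyError` whenever a task omits that key) with `dict.get` plus instance-level fallbacks, so per-request values win and the model's own settings fill the gaps. A standalone sketch of that override pattern, with illustrative default values standing in for the instance attributes (`self.max_length`, `self.temperature`) used in the real code:

```python
# Hypothetical defaults standing in for instance attributes in the diff.
INSTANCE_DEFAULTS = {"max_gen_toks": 256, "temperature": 0.0}


def resolve_generation_kwargs(request_args: dict) -> dict:
    """Per-request values take precedence; otherwise fall back to defaults."""
    return {
        # None when the task sets no stop sequences (no KeyError):
        "stop": request_args.get("until"),
        "max_tokens_to_sample": request_args.get(
            "max_gen_toks", INSTANCE_DEFAULTS["max_gen_toks"]
        ),
        "temperature": request_args.get(
            "temperature", INSTANCE_DEFAULTS["temperature"]
        ),
    }


# "max_gen_toks" is absent here, so the instance default is used:
resolved = resolve_generation_kwargs({"until": ["\n\n"], "temperature": 0.7})
```

The same three resolved values are what the diff forwards to `anthropic_completion` as `stop`, `max_tokens_to_sample`, and `temperature`.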