gaoqiong / lm-evaluation-harness

Commit ffda60ab
authored Jul 02, 2024 by Nathan Habib
removing float16 conversion in log_softmax
parent e377c47f
Showing 1 changed file with 1 addition and 1 deletion.
lm_eval/models/huggingface.py (+1, -1)
...
...
@@ -1132,7 +1132,7 @@ class HFLM(TemplateLM):
         multi_logits = F.log_softmax(
             self._model_call(batched_inps, **call_kwargs),
             dim=-1,
-            dtype=torch.float16,
+            # dtype=torch.float16,
         )  # [batch, padding_length (inp or cont), vocab]
         for (request_str, ctx_tokens, _), logits, inplen, cont_toks in zip(
...
...
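For context, here is a minimal standalone sketch (not part of the commit; the shapes, seed, and printed output are illustrative) of why the hard-coded float16 here was worth removing: torch.nn.functional.log_softmax accepts a dtype argument that casts its input before the computation, and half precision carries only ~11 significand bits, so per-token log-probabilities (and the sequence log-likelihoods summed from them) can drift measurably relative to a float32 computation.

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    logits = torch.randn(4, 32000)  # dummy [batch, vocab] logits in float32

    # Default: log_softmax runs in the input's dtype (float32 here).
    logp_fp32 = F.log_softmax(logits, dim=-1)

    # Old behavior: the input is cast to float16 before the softmax.
    logp_fp16 = F.log_softmax(logits, dim=-1, dtype=torch.float16)

    # Maximum disagreement between the two computations.
    err = (logp_fp32 - logp_fp16.float()).abs().max().item()
    print(f"max |fp32 - fp16| log-prob difference: {err:.6f}")

With the dtype argument commented out, the log-probabilities are computed in whatever dtype _model_call returns, instead of always being downcast to float16.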