gaoqiong / lm-evaluation-harness · Commits

Commit 8a89b30c
Authored Mar 16, 2023 by Benjamin Fattori
Parent: 97f936be

empty cuda cache after determining largest possible batch size
Showing 1 changed file with 4 additions and 2 deletions:

lm_eval/base.py (+4, -2)
@@ -256,7 +256,7 @@ class BaseLM(LM):
         # pull longest context sample from request
         _, context_enc, continuation_enc = re_ord.get_reordered()[0]
         max_context = len((context_enc + continuation_enc)[-(self.max_length + 1):][:-1])
         if (self.batch_size == 'auto'):
             if override_bs is None:
 ...
@@ -268,11 +268,13 @@ class BaseLM(LM):
                     return batch_size
                 batch_size = forward_batch()
-                print(f"Determined Largest batch size: {batch_size}")
+                print(f"Determined largest batch size: {batch_size}")
                 adaptive_batch_size = batch_size
             else:
                 adaptive_batch_size = override_bs
+        torch.cuda.empty_cache()  # empty cache after determining batch size
         for chunk in utils.chunks(
             tqdm(re_ord.get_reordered(), disable=disable_tqdm),
             self.batch_size if self.batch_size != "auto" else adaptive_batch_size
 ...
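For context, the added call makes sense given how PyTorch's caching allocator behaves: a batch-size probe that deliberately pushes the GPU toward out-of-memory leaves large cached blocks behind, and torch.cuda.empty_cache() returns them to the driver before the evaluation loop allocates its real batches. Below is a minimal, hypothetical sketch of that pattern; the placeholder model, sequence length, starting batch size, and halve-on-OOM strategy are illustrative assumptions, not the harness's actual forward_batch() implementation.

# Minimal sketch of "probe largest batch size, then empty the CUDA cache".
# Everything here is an illustrative assumption, not code from lm_eval/base.py.
import torch

# Placeholder model standing in for the LM under evaluation.
model = torch.nn.Sequential(
    torch.nn.Embedding(50257, 768),
    torch.nn.Linear(768, 50257),
).to("cuda").eval()

def find_largest_batch_size(model, seq_len=2048, start=512):
    batch_size = start
    while batch_size > 0:
        try:
            tokens = torch.zeros((batch_size, seq_len), dtype=torch.long, device="cuda")
            with torch.no_grad():
                model(tokens)  # we only care whether the forward pass fits
            return batch_size
        except torch.cuda.OutOfMemoryError:
            batch_size //= 2  # halve and retry on OOM
    raise RuntimeError("no batch size fits in GPU memory")

batch_size = find_largest_batch_size(model)
print(f"Determined largest batch size: {batch_size}")
# Failed probe attempts leave cached blocks in PyTorch's allocator; releasing
# them here, before the real evaluation batches are allocated, is what the
# torch.cuda.empty_cache() call added by this commit does.
torch.cuda.empty_cache()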