chenpangpang / transformers · Commit 680c610f (unverified)

Authored Dec 12, 2023 by Arthur; committed by GitHub on Dec 12, 2023
Parent: 4b759da8

Hot-fix-mixstral-loss (#27948)

* fix loss computation
* compute on GPU if possible
Showing 1 changed file with 2 additions and 1 deletion.
src/transformers/models/mixtral/modeling_mixtral.py (+2 −1)
```diff
@@ -95,7 +95,8 @@ def load_balancing_loss_func(gate_logits: torch.Tensor, num_experts: torch.Tensor
     if isinstance(gate_logits, tuple):
         # cat along the layers?
-        gate_logits = torch.cat(gate_logits, dim=0)
+        compute_device = gate_logits[0].device
+        gate_logits = torch.cat([gate.to(compute_device) for gate in gate_logits], dim=0)

     routing_weights, selected_experts = torch.topk(gate_logits, top_k, dim=-1)
     routing_weights = routing_weights.softmax(dim=-1)
```
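For context, here is a minimal, self-contained sketch of the pattern the commit introduces. When a MoE model runs with its layers sharded across devices, the per-layer router (gate) logits arrive as a tuple of tensors that may live on different devices, and a plain `torch.cat` over such a tuple raises a device-mismatch error. The shapes, the CPU-only setup, and the variable names outside the patched lines are illustrative assumptions, not part of the commit:

```python
import torch

# Hypothetical per-layer router logits, one tensor per MoE layer. Under
# model parallelism each layer's logits may sit on a different GPU; here
# they are simulated with CPU tensors so the sketch runs anywhere.
num_layers, tokens, num_experts, top_k = 3, 8, 8, 2
gate_logits = tuple(torch.randn(tokens, num_experts) for _ in range(num_layers))

if isinstance(gate_logits, tuple):
    # The fix: pick the first layer's device as the compute device and
    # move every layer's logits there before concatenating. Without this,
    # torch.cat over tensors on mismatched devices would fail.
    compute_device = gate_logits[0].device
    gate_logits = torch.cat([gate.to(compute_device) for gate in gate_logits], dim=0)

# Same downstream steps as in the patched function.
routing_weights, selected_experts = torch.topk(gate_logits, top_k, dim=-1)
routing_weights = routing_weights.softmax(dim=-1)
print(routing_weights.shape)  # torch.Size([num_layers * tokens, top_k])
```

The design choice matches the commit message: gathering onto `gate_logits[0].device` rather than onto the CPU keeps the auxiliary load-balancing loss computation on GPU when one is available.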