Unverified Commit 395e566a authored by Pedro Cuenca's avatar Pedro Cuenca Committed by GitHub

gpt-bigcode: avoid `zero_` to support Core ML (#24755)

gpt-bigcode: avoid `zero_` to support Core ML.

In-place `zero_()` is not supported by the Core ML conversion process.
This PR replaces it with `torch.zeros_like` so conversion can proceed.

The change only affects a workaround for a PyTorch bug on the `cpu`
device.
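The substitution can be sketched as follows. This is a minimal, self-contained illustration of the pattern changed by the PR (the tensor shape here is arbitrary and for demonstration only): instead of mutating the tensor in place with `zero_()`, a fresh zero tensor with the same shape, dtype, and device is created with `torch.zeros_like`, which tracing-based converters such as Core ML's can handle.

```python
import torch

attn_weights = torch.randn(2, 3)

# Before (in-place op, rejected by the Core ML conversion process):
#   attn_weights.zero_()

# After (functional replacement used in this PR):
attn_weights = torch.zeros_like(attn_weights)

print(attn_weights.sum().item())  # 0.0
```

Both forms leave `attn_weights` filled with zeros; only the functional form avoids in-place mutation.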
parent 02842855
@@ -164,7 +164,7 @@ class GPTBigCodeAttention(nn.Module):
             # This is needed because of a bug in pytorch https://github.com/pytorch/pytorch/issues/80588.
             # The bug was fixed in https://github.com/pytorch/pytorch/pull/96086,
             # but the fix has not been released as of pytorch version 2.0.0.
-            attn_weights.zero_()
+            attn_weights = torch.zeros_like(attn_weights)
             beta = 1
         else:
             beta = 0