Unverified Commit 95dc093b authored by Daniel Hernandez Garcia, committed by GitHub

[BugFix] gemma loading weights "lm_head.weight" key error (#577)

parent d9ac6392
@@ -310,6 +310,10 @@ class GemmaForCausalLM(nn.Module):
                 weight_loader(param, loaded_weight, shard_id)
                 break
             else:
+                # lm_head is not used in vllm as it is tied with embed_token.
+                # To prevent errors, skip loading lm_head.weight.
+                if "lm_head.weight" in name:
+                    continue
                 # Skip loading extra bias for GPTQ models.
                 if name.endswith(".bias") and name not in params_dict:
                     continue
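For context on why the skip is safe: Gemma ties its output projection to the input embedding, so the model never registers a separate lm_head parameter, while exported checkpoints may still carry an lm_head.weight entry. The following is a minimal standalone sketch (TinyTiedLM and the checkpoint contents are hypothetical, not vLLM's actual loader) illustrating the weight-tying pattern and the KeyError the guard prevents.

import torch
import torch.nn as nn

class TinyTiedLM(nn.Module):
    """Toy model with a tied output head: no lm_head parameter exists."""

    def __init__(self, vocab_size: int = 8, hidden: int = 4) -> None:
        super().__init__()
        self.embed_tokens = nn.Embedding(vocab_size, hidden)

    def logits(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Reuse the embedding matrix as the output projection (weight tying).
        return hidden_states @ self.embed_tokens.weight.t()

model = TinyTiedLM()
params_dict = dict(model.named_parameters())
# params_dict contains only "embed_tokens.weight"; there is no "lm_head.weight".

# A checkpoint exported with an explicit output head still carries this key
# (hypothetical checkpoint contents for illustration):
checkpoint = {
    "lm_head.weight": torch.zeros(8, 4),
    "embed_tokens.weight": torch.zeros(8, 4),
}

for name, loaded_weight in checkpoint.items():
    # Without this guard, params_dict[name] raises KeyError for "lm_head.weight",
    # which is the failure the commit above fixes.
    if "lm_head.weight" in name:
        continue
    params_dict[name].data.copy_(loaded_weight)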