Unverified Commit 3de6a6b4 authored by Aditya Kane, committed by GitHub

Update configuration_llama.py: fixed broken link (#28946)



* Update configuration_llama.py: fix broken link

* [Nit] Explicit redirection not required

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
parent 3e70a207
@@ -78,7 +78,7 @@ class LlamaConfig(PretrainedConfig):
             End of stream token id.
         pretraining_tp (`int`, *optional*, defaults to 1):
             Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
-            document](https://huggingface.co/docs/transformers/parallelism) to understand more about it. This value is
+            document](https://huggingface.co/docs/transformers/main/perf_train_gpu_many#tensor-parallelism) to understand more about it. This value is
             necessary to ensure exact reproducibility of the pretraining results. Please refer to [this
             issue](https://github.com/pytorch/pytorch/issues/76232).
         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
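
For context, the docstring above describes how `pretraining_tp` is consumed: it is an ordinary config field set at construction time. A minimal sketch, assuming a recent `transformers` install (the value 2 is only an illustration, not a recommended setting):

from transformers import LlamaConfig

# pretraining_tp > 1 replays the tensor-parallel slicing of the linear
# projections that was used during pretraining, so outputs reproduce the
# original run exactly (see the linked PyTorch issue). It does not enable
# tensor parallelism at inference time.
config = LlamaConfig(pretraining_tp=2)
print(config.pretraining_tp)  # 2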