"...git@developer.sourcefind.cn:chenpangpang/transformers.git" did not exist on "f82ee109e6e58e19c21e631a2354af3b00da9a3c"
Unverified commit c618ab4f authored by joaoareis, committed by GitHub

Fix DecisionTransformerConfig docstring (#23450)

parent 5777c3cb
@@ -57,9 +57,9 @@ class DecisionTransformerConfig(PretrainedConfig):
         n_positions (`int`, *optional*, defaults to 1024):
             The maximum sequence length that this model might ever be used with. Typically set this to something large
             just in case (e.g., 512 or 1024 or 2048).
-        n_layer (`int`, *optional*, defaults to 12):
+        n_layer (`int`, *optional*, defaults to 3):
             Number of hidden layers in the Transformer encoder.
-        n_head (`int`, *optional*, defaults to 12):
+        n_head (`int`, *optional*, defaults to 1):
             Number of attention heads for each attention layer in the Transformer encoder.
         n_inner (`int`, *optional*):
             Dimensionality of the inner feed-forward layers. If unset, will default to 4 times `n_embd`.
...
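For context, a minimal sketch (assuming a recent `transformers` install) that checks the constructor defaults the corrected docstring now reflects; the printed values are what I expect from the config class, not something asserted by this commit:

    from transformers import DecisionTransformerConfig

    # Instantiate with no arguments to inspect the actual code defaults,
    # which the updated docstring describes (previously it said 12 for both).
    config = DecisionTransformerConfig()
    print(config.n_layer)  # expected: 3
    print(config.n_head)   # expected: 1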