Unverified Commit 64d95c44 authored by Aayush Neupane, committed by GitHub

Add missing parameter definition in layoutlm config (#21960)

Four parameters in the `LayoutLM` config were missing definitions; added their definitions (copied from `BertConfig`).
parent f3c75f8b
@@ -72,6 +72,19 @@ class LayoutLMConfig(PretrainedConfig):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
pad_token_id (`int`, *optional*, defaults to 0):
The value used to pad `input_ids`.
position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
classifier_dropout (`float`, *optional*):
The dropout ratio for the classification head.
max_2d_position_embeddings (`int`, *optional*, defaults to 1024):
The maximum value that the 2D position embedding might ever be used with. Typically set this to something large
just in case (e.g., 1024).
...
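For context, here is a minimal sketch (not part of this commit) of where the newly documented parameters land when constructing a `LayoutLMConfig`; the values shown simply mirror the defaults described in the docstring above:

```python
# Minimal sketch, assuming the `transformers` library is installed. It passes the
# four newly documented parameters (plus max_2d_position_embeddings) explicitly,
# using the default values stated in the docstring.
from transformers import LayoutLMConfig

config = LayoutLMConfig(
    pad_token_id=0,                      # value used to pad input_ids
    position_embedding_type="absolute",  # or "relative_key" / "relative_key_query"
    use_cache=True,                      # only relevant when config.is_decoder=True
    classifier_dropout=None,             # dropout ratio for the classification head
    max_2d_position_embeddings=1024,     # upper bound for 2D (bounding-box) positions
)
print(config.position_embedding_type)  # "absolute"
```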