Unverified Commit ad08137e authored by Daniil, committed by GitHub

[docstring] Fix docstrings for `CodeGen` (#26821)



* remove CodeGen docstrings from objects_to_ignore

* autofix codegen docstrings

* fill in the missing types and docstrings

* fixup

* put each description on a separate line

* apply docstring suggestions from code review
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>

* update n_ctx description in CodeGenConfig

---------
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
parent bdbcd5d4
```diff
--- a/src/transformers/models/codegen/configuration_codegen.py
+++ b/src/transformers/models/codegen/configuration_codegen.py
@@ -57,6 +57,8 @@ class CodeGenConfig(PretrainedConfig):
         n_positions (`int`, *optional*, defaults to 2048):
             The maximum sequence length that this model might ever be used with. Typically set this to something large
             just in case (e.g., 512 or 1024 or 2048).
+        n_ctx (`int`, *optional*, defaults to 2048):
+            This attribute is used in `CodeGenModel.__init__` without any real effect.
         n_embd (`int`, *optional*, defaults to 4096):
             Dimensionality of the embeddings and hidden states.
         n_layer (`int`, *optional*, defaults to 28):
@@ -65,22 +67,29 @@ class CodeGenConfig(PretrainedConfig):
             Number of attention heads for each attention layer in the Transformer encoder.
         rotary_dim (`int`, *optional*, defaults to 64):
             Number of dimensions in the embedding that Rotary Position Embedding is applied to.
-        n_inner (`int`, *optional*, defaults to None):
+        n_inner (`int`, *optional*):
             Dimensionality of the inner feed-forward layers. `None` will set it to 4 times n_embd
         activation_function (`str`, *optional*, defaults to `"gelu_new"`):
             Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`.
-        resid_pdrop (`float`, *optional*, defaults to 0.1):
+        resid_pdrop (`float`, *optional*, defaults to 0.0):
             The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
-        embd_pdrop (`int`, *optional*, defaults to 0.1):
+        embd_pdrop (`int`, *optional*, defaults to 0.0):
             The dropout ratio for the embeddings.
-        attn_pdrop (`float`, *optional*, defaults to 0.1):
+        attn_pdrop (`float`, *optional*, defaults to 0.0):
             The dropout ratio for the attention.
-        layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
+        layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
             The epsilon to use in the layer normalization layers.
         initializer_range (`float`, *optional*, defaults to 0.02):
             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
         use_cache (`bool`, *optional*, defaults to `True`):
             Whether or not the model should return the last key/values attentions (not used by all models).
+        bos_token_id (`int`, *optional*, defaults to 50256):
+            Beginning of stream token id.
+        eos_token_id (`int`, *optional*, defaults to 50256):
+            End of stream token id.
+        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+            Whether the model's input and output word embeddings should be tied. Note that this is only relevant if the
+            model has a output word embedding layer.
 
     Example:
```
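For context, a minimal sketch of how the defaults documented above surface on a freshly constructed config. It assumes only that `transformers` is installed; it is not part of the diff:

```python
# Minimal sketch, not part of the diff: inspect the defaults documented above.
from transformers import CodeGenConfig

config = CodeGenConfig()
print(config.n_embd)              # 4096
print(config.n_inner)             # None -- the model's MLP then uses 4 * n_embd
print(config.bos_token_id)        # 50256
print(config.eos_token_id)        # 50256
print(config.layer_norm_epsilon)  # 1e-05

# Overrides are plain keyword arguments, e.g. a small debug-sized config:
small = CodeGenConfig(n_embd=256, n_layer=4, n_head=8, rotary_dim=32)
```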
```diff
--- a/src/transformers/models/codegen/tokenization_codegen.py
+++ b/src/transformers/models/codegen/tokenization_codegen.py
@@ -133,16 +133,20 @@ class CodeGenTokenizer(PreTrainedTokenizer):
         errors (`str`, *optional*, defaults to `"replace"`):
             Paradigm to follow when decoding bytes to UTF-8. See
             [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
-        unk_token (`str`, *optional*, defaults to `<|endoftext|>`):
+        unk_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
             The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
             token instead.
-        bos_token (`str`, *optional*, defaults to `<|endoftext|>`):
+        bos_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
             The beginning of sequence token.
-        eos_token (`str`, *optional*, defaults to `<|endoftext|>`):
+        eos_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
             The end of sequence token.
+        pad_token (`str`, *optional*):
+            The token used for padding, for example when batching sequences of different lengths.
         add_prefix_space (`bool`, *optional*, defaults to `False`):
             Whether or not to add an initial space to the input. This allows to treat the leading word just as any
             other word. (CodeGen tokenizer detect beginning of words by the preceding space).
+        add_bos_token (`bool`, *optional*, defaults to `False`):
+            Whether to add a beginning of sequence token at the start of sequences.
     """
 
     vocab_files_names = VOCAB_FILES_NAMES
```
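A minimal sketch of the special tokens described above. Assumptions (not part of the diff): `transformers` is installed, the Hugging Face Hub is reachable, and `Salesforce/codegen-350M-mono` is used as an example public CodeGen checkpoint:

```python
# Minimal sketch, not part of the diff: special tokens on a public checkpoint.
from transformers import CodeGenTokenizer

tok = CodeGenTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
print(tok.unk_token, tok.bos_token, tok.eos_token)  # all "<|endoftext|>"
print(tok.pad_token)  # None unless set explicitly, e.g. before padding batches:
tok.pad_token = tok.eos_token
```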
```diff
--- a/src/transformers/models/codegen/tokenization_codegen_fast.py
+++ b/src/transformers/models/codegen/tokenization_codegen_fast.py
@@ -92,25 +92,23 @@ class CodeGenTokenizerFast(PreTrainedTokenizerFast):
     refer to this superclass for more information regarding those methods.
 
     Args:
-        vocab_file (`str`):
+        vocab_file (`str`, *optional*):
             Path to the vocabulary file.
-        merges_file (`str`):
+        merges_file (`str`, *optional*):
             Path to the merges file.
-        errors (`str`, *optional*, defaults to `"replace"`):
-            Paradigm to follow when decoding bytes to UTF-8. See
-            [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
+        tokenizer_file (`str`, *optional*):
+            Path to [tokenizers](https://github.com/huggingface/tokenizers) file (generally has a .json extension) that
+            contains everything needed to load the tokenizer.
-        unk_token (`str`, *optional*, defaults to `<|endoftext|>`):
+        unk_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
             The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
             token instead.
-        bos_token (`str`, *optional*, defaults to `<|endoftext|>`):
+        bos_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
             The beginning of sequence token.
-        eos_token (`str`, *optional*, defaults to `<|endoftext|>`):
+        eos_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
             The end of sequence token.
         add_prefix_space (`bool`, *optional*, defaults to `False`):
             Whether or not to add an initial space to the input. This allows to treat the leading word just as any
             other word. (CodeGen tokenizer detect beginning of words by the preceding space).
-        trim_offsets (`bool`, *optional*, defaults to `True`):
-            Whether or not the post-processing step should trim offsets to avoid including whitespaces.
     """
 
     vocab_files_names = VOCAB_FILES_NAMES
```
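The reason `vocab_file` and `merges_file` became *optional* here: a fast tokenizer can be built from a single serialized `tokenizer.json`. A minimal sketch, under the same checkpoint assumption as above and not part of the diff:

```python
# Minimal sketch, not part of the diff: a fast tokenizer needs no separate
# vocab/merges files when a serialized tokenizer.json is available.
from transformers import CodeGenTokenizerFast

tok = CodeGenTokenizerFast.from_pretrained("Salesforce/codegen-350M-mono")
print(tok.is_fast)                     # True
print(tok("def hello():").input_ids)  # BPE token ids for the code snippet

# Or directly from a local file (hypothetical path):
# tok = CodeGenTokenizerFast(tokenizer_file="tokenizer.json")
```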
```diff
--- a/utils/check_docstrings.py
+++ b/utils/check_docstrings.py
@@ -123,9 +123,6 @@ OBJECTS_TO_IGNORE = [
     "CanineTokenizer",
     "ChineseCLIPTextModel",
     "ClapTextConfig",
-    "CodeGenConfig",
-    "CodeGenTokenizer",
-    "CodeGenTokenizerFast",
     "ConditionalDetrConfig",
     "ConditionalDetrImageProcessor",
     "ConvBertConfig",
```
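Removing the three CodeGen entries from `OBJECTS_TO_IGNORE` means `utils/check_docstrings.py` now validates these classes in CI, which is what forced the default values above to be corrected. The core of such a check is comparing documented defaults against the actual `__init__` signature; an illustrative sketch of that idea, not the script's real implementation:

```python
# Illustrative sketch, not the real check_docstrings.py: the check boils down
# to comparing documented defaults with the actual __init__ signature.
import inspect
from transformers import CodeGenConfig

sig = inspect.signature(CodeGenConfig.__init__)
for name in ("resid_pdrop", "embd_pdrop", "attn_pdrop", "layer_norm_epsilon"):
    print(f"{name}: signature default = {sig.parameters[name].default!r}")
# resid_pdrop: 0.0, embd_pdrop: 0.0, attn_pdrop: 0.0, layer_norm_epsilon: 1e-05
# -- which is why the docstring defaults in the first hunk were updated.
```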