Commit 9b468a7c (unverified)
Authored Jan 19, 2023 by Matthijs Hollemans; committed by GitHub on Jan 19, 2023

workaround documentation rendering bug (#21189)

Parent: 464c86ac
Showing 20 changed files with 33 additions and 33 deletions.
src/transformers/feature_extraction_sequence_utils.py (+3, -3)
src/transformers/models/layoutlmv2/tokenization_layoutlmv2.py (+2, -2)
src/transformers/models/layoutlmv2/tokenization_layoutlmv2_fast.py (+1, -1)
src/transformers/models/layoutlmv3/tokenization_layoutlmv3.py (+3, -3)
src/transformers/models/layoutlmv3/tokenization_layoutlmv3_fast.py (+1, -1)
src/transformers/models/layoutxlm/tokenization_layoutxlm.py (+2, -2)
src/transformers/models/layoutxlm/tokenization_layoutxlm_fast.py (+2, -2)
src/transformers/models/luke/tokenization_luke.py (+2, -2)
src/transformers/models/markuplm/tokenization_markuplm.py (+2, -2)
src/transformers/models/markuplm/tokenization_markuplm_fast.py (+1, -1)
src/transformers/models/mctct/feature_extraction_mctct.py (+1, -1)
src/transformers/models/mluke/tokenization_mluke.py (+2, -2)
src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py (+1, -1)
src/transformers/models/tapas/tokenization_tapas.py (+2, -2)
src/transformers/models/tapex/tokenization_tapex.py (+1, -1)
src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py (+1, -1)
src/transformers/models/wav2vec2/tokenization_wav2vec2.py (+1, -1)
src/transformers/models/whisper/feature_extraction_whisper.py (+1, -1)
src/transformers/tokenization_utils_base.py (+3, -3)
src/transformers/tokenization_utils_fast.py (+1, -1)
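Every file listed above receives the same one-line docstring workaround: the bare `>= 7.5` comparison is wrapped in inline backticks so the documentation builder renders it correctly (the rendering bug referenced in the commit title). Schematically, for the lines shown in the per-file diffs below:

```python
# The recurring change, shown as docstring text only (not a new code path):
#
# before:  ... on NVIDIA hardware with compute capability >= 7.5 (Volta).
# after:   ... on NVIDIA hardware with compute capability `>= 7.5` (Volta).
```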
src/transformers/feature_extraction_sequence_utils.py
...
...
@@ -107,7 +107,7 @@ class SequenceFeatureExtractor(FeatureExtractionMixin):
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
- >= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
+ `>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
return_attention_mask (`bool`, *optional*):
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific feature_extractor's default.
...
...
@@ -250,7 +250,7 @@ class SequenceFeatureExtractor(FeatureExtractionMixin):
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
- >= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
+ `>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
return_attention_mask:
(optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
...
...
@@ -309,7 +309,7 @@ class SequenceFeatureExtractor(FeatureExtractionMixin):
max_length: maximum length of the returned list and optionally padding length (see below)
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
- >= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
+ `>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
truncation:
(optional) Activates truncation to cut input sequences longer than `max_length` to `max_length`.
"""
...
...
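To illustrate the `pad_to_multiple_of` argument described in the docstrings above, here is a minimal usage sketch with one concrete `SequenceFeatureExtractor` subclass. The subclass choice, clip lengths, and printed shape are illustrative only and are not part of this commit.

```python
# Minimal sketch (assumes transformers and numpy are installed): pad a batch of raw
# audio so the padded length is a multiple of 128, as suggested above for TPUs.
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

extractor = Wav2Vec2FeatureExtractor()  # defaults: 16 kHz mono, padding_value=0.0

# Two clips of unequal length (silence, just to keep the example self-contained).
clips = [np.zeros(1001, dtype=np.float32), np.zeros(1800, dtype=np.float32)]

batch = extractor(
    clips,
    sampling_rate=16000,
    padding="longest",       # pad to the longest clip in the batch...
    pad_to_multiple_of=128,  # ...then round that length up to a multiple of 128
    return_tensors="np",
)
print(batch["input_values"].shape)  # (2, 1920): 1800 rounded up to the next multiple of 128
```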
src/transformers/models/layoutlmv2/tokenization_layoutlmv2.py
...
...
@@ -100,7 +100,7 @@ LAYOUTLMV2_ENCODE_KWARGS_DOCSTRING = r"""
argument defines the number of overlapping tokens.
pad_to_multiple_of (`int`, *optional*):
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
- the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
+ the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
return_tensors (`str` or [`~file_utils.TensorType`], *optional*):
If set, will return tensors instead of list of python integers. Acceptable values are:
...
...
@@ -1288,7 +1288,7 @@ class LayoutLMv2Tokenizer(PreTrainedTokenizer):
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
- >= 7.5 (Volta).
+ `>= 7.5` (Volta).
return_attention_mask:
(optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
...
...
src/transformers/models/layoutlmv2/tokenization_layoutlmv2_fast.py
...
...
@@ -710,7 +710,7 @@ class LayoutLMv2TokenizerFast(PreTrainedTokenizerFast):
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
- >= 7.5 (Volta).
+ `>= 7.5` (Volta).
return_attention_mask:
(optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
...
...
src/transformers/models/layoutlmv3/tokenization_layoutlmv3.py
...
...
@@ -97,7 +97,7 @@ LAYOUTLMV3_ENCODE_KWARGS_DOCSTRING = r"""
argument defines the number of overlapping tokens.
pad_to_multiple_of (`int`, *optional*):
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
- the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
+ the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
return_tensors (`str` or [`~file_utils.TensorType`], *optional*):
If set, will return tensors instead of list of python integers. Acceptable values are:
...
...
@@ -146,7 +146,7 @@ LAYOUTLMV3_ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING = r"""
argument defines the number of overlapping tokens.
pad_to_multiple_of (`int`, *optional*):
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
- the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
+ the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
return_tensors (`str` or [`~utils.TensorType`], *optional*):
If set, will return tensors instead of list of python integers. Acceptable values are:
...
...
@@ -1420,7 +1420,7 @@ class LayoutLMv3Tokenizer(PreTrainedTokenizer):
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
- >= 7.5 (Volta).
+ `>= 7.5` (Volta).
return_attention_mask:
(optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
...
...
src/transformers/models/layoutlmv3/tokenization_layoutlmv3_fast.py
...
...
@@ -762,7 +762,7 @@ class LayoutLMv3TokenizerFast(PreTrainedTokenizerFast):
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
- >= 7.5 (Volta).
+ `>= 7.5` (Volta).
return_attention_mask:
(optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
...
...
src/transformers/models/layoutxlm/tokenization_layoutxlm.py
...
...
@@ -82,7 +82,7 @@ LAYOUTXLM_ENCODE_KWARGS_DOCSTRING = r"""
argument defines the number of overlapping tokens.
pad_to_multiple_of (`int`, *optional*):
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
- the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
+ the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
return_tensors (`str` or [`~file_utils.TensorType`], *optional*):
If set, will return tensors instead of list of python integers. Acceptable values are:
...
...
@@ -1118,7 +1118,7 @@ class LayoutXLMTokenizer(PreTrainedTokenizer):
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
- >= 7.5 (Volta).
+ `>= 7.5` (Volta).
return_attention_mask:
(optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
...
...
src/transformers/models/layoutxlm/tokenization_layoutxlm_fast.py
...
...
@@ -85,7 +85,7 @@ LAYOUTXLM_ENCODE_KWARGS_DOCSTRING = r"""
argument defines the number of overlapping tokens.
pad_to_multiple_of (`int`, *optional*):
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
- the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
+ the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
return_tensors (`str` or [`~file_utils.TensorType`], *optional*):
If set, will return tensors instead of list of python integers. Acceptable values are:
...
...
@@ -674,7 +674,7 @@ class LayoutXLMTokenizerFast(PreTrainedTokenizerFast):
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
- >= 7.5 (Volta).
+ `>= 7.5` (Volta).
return_attention_mask:
(optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
...
...
src/transformers/models/luke/tokenization_luke.py
...
...
@@ -1440,7 +1440,7 @@ class LukeTokenizer(PreTrainedTokenizer):
The maximum length of the entity sequence.
pad_to_multiple_of (`int`, *optional*):
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
- the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
+ the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
return_attention_mask (`bool`, *optional*):
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer's default, defined by the `return_outputs` attribute. [What are attention
...
...
@@ -1584,7 +1584,7 @@ class LukeTokenizer(PreTrainedTokenizer):
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
- >= 7.5 (Volta).
+ `>= 7.5` (Volta).
return_attention_mask:
(optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
...
...
src/transformers/models/markuplm/tokenization_markuplm.py
...
...
@@ -96,7 +96,7 @@ MARKUPLM_ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING = r"""
argument defines the number of overlapping tokens.
pad_to_multiple_of (`int`, *optional*):
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
- the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
+ the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
return_tensors (`str` or [`~file_utils.TensorType`], *optional*):
If set, will return tensors instead of list of python integers. Acceptable values are:
...
...
@@ -1392,7 +1392,7 @@ class MarkupLMTokenizer(PreTrainedTokenizer):
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
- >= 7.5 (Volta).
+ `>= 7.5` (Volta).
return_attention_mask:
(optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
...
...
src/transformers/models/markuplm/tokenization_markuplm_fast.py
...
...
@@ -806,7 +806,7 @@ class MarkupLMTokenizerFast(PreTrainedTokenizerFast):
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
- >= 7.5 (Volta).
+ `>= 7.5` (Volta).
return_attention_mask:
(optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
...
...
src/transformers/models/mctct/feature_extraction_mctct.py
...
...
@@ -275,7 +275,7 @@ class MCTCTFeatureExtractor(SequenceFeatureExtractor):
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
- >= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
+ `>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
return_attention_mask (`bool`, *optional*):
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific feature_extractor's default.
...
...
src/transformers/models/mluke/tokenization_mluke.py
...
...
@@ -1238,7 +1238,7 @@ class MLukeTokenizer(PreTrainedTokenizer):
The maximum length of the entity sequence.
pad_to_multiple_of (`int`, *optional*):
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
- the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
+ the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
return_attention_mask (`bool`, *optional*):
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer's default, defined by the `return_outputs` attribute. [What are attention
...
...
@@ -1383,7 +1383,7 @@ class MLukeTokenizer(PreTrainedTokenizer):
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
- >= 7.5 (Volta).
+ `>= 7.5` (Volta).
return_attention_mask:
(optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
...
...
src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py
...
...
@@ -160,7 +160,7 @@ class Speech2TextFeatureExtractor(SequenceFeatureExtractor):
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
- >= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
+ `>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
return_attention_mask (`bool`, *optional*):
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific feature_extractor's default.
...
...
src/transformers/models/tapas/tokenization_tapas.py
...
...
@@ -223,7 +223,7 @@ TAPAS_ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING = r"""
which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (`int`, *optional*):
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
- the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
+ the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
return_tensors (`str` or [`~utils.TensorType`], *optional*):
If set, will return tensors instead of list of python integers. Acceptable values are:
...
...
@@ -1852,7 +1852,7 @@ class TapasTokenizer(PreTrainedTokenizer):
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
- >= 7.5 (Volta).
+ `>= 7.5` (Volta).
return_attention_mask:
(optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
...
...
src/transformers/models/tapex/tokenization_tapex.py
...
...
@@ -106,7 +106,7 @@ TAPEX_ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING = r"""
argument defines the number of overlapping tokens.
pad_to_multiple_of (`int`, *optional*):
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
- the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
+ the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
return_tensors (`str` or [`~file_utils.TensorType`], *optional*):
If set, will return tensors instead of list of python integers. Acceptable values are:
...
...
src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py
...
...
@@ -136,7 +136,7 @@ class Wav2Vec2FeatureExtractor(SequenceFeatureExtractor):
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
- >= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
+ `>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
return_attention_mask (`bool`, *optional*):
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific feature_extractor's default.
...
...
src/transformers/models/wav2vec2/tokenization_wav2vec2.py
...
...
@@ -88,7 +88,7 @@ WAV2VEC2_KWARGS_DOCSTRING = r"""
length (like XLNet) truncation/padding to a maximum length will be deactivated.
pad_to_multiple_of (`int`, *optional*):
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
- the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
+ the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
return_tensors (`str` or [`~utils.TensorType`], *optional*):
If set, will return tensors instead of list of python integers. Acceptable values are:
...
...
src/transformers/models/whisper/feature_extraction_whisper.py
...
...
@@ -240,7 +240,7 @@ class WhisperFeatureExtractor(SequenceFeatureExtractor):
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
- >= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
+ `>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
return_attention_mask (`bool`, *optional*):
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific feature_extractor's default.
...
...
src/transformers/tokenization_utils_base.py
...
...
@@ -1343,7 +1343,7 @@ ENCODE_KWARGS_DOCSTRING = r"""
which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (`int`, *optional*):
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
- the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
+ the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
return_tensors (`str` or [`~utils.TensorType`], *optional*):
If set, will return tensors instead of list of python integers. Acceptable values are:
...
...
@@ -2902,7 +2902,7 @@ class PreTrainedTokenizerBase(SpecialTokensMixin, PushToHubMixin):
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
- >= 7.5 (Volta).
+ `>= 7.5` (Volta).
return_attention_mask (`bool`, *optional*):
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer's default, defined by the `return_outputs` attribute.
...
...
@@ -3339,7 +3339,7 @@ class PreTrainedTokenizerBase(SpecialTokensMixin, PushToHubMixin):
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
- >= 7.5 (Volta).
+ `>= 7.5` (Volta).
return_attention_mask:
(optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
...
...
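On the tokenizer side, the same argument is accepted both by `__call__` (the ENCODE_KWARGS docstring above) and by the standalone `pad()` method whose docstring is changed in the last hunk. A minimal sketch, assuming a checkpoint with a padding token such as `bert-base-uncased` (example choice, not part of this commit):

```python
# Minimal sketch: pad a text batch so every sequence length is a multiple of 8,
# which helps Tensor Core utilisation as the docstrings above note.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # example checkpoint

batch = tokenizer(
    ["a short sentence", "a slightly longer example sentence"],
    padding="longest",     # pad to the longest sequence in the batch...
    pad_to_multiple_of=8,  # ...rounded up to the next multiple of 8
)
print([len(ids) for ids in batch["input_ids"]])  # e.g. [8, 8]

# The pad() method accepts the same argument for already-encoded inputs:
encoded = [tokenizer("another example"), tokenizer("one more, slightly longer example")]
padded = tokenizer.pad(encoded, padding="longest", pad_to_multiple_of=8)
print([len(ids) for ids in padded["input_ids"]])  # both lengths are a multiple of 8
```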
src/transformers/tokenization_utils_fast.py
...
...
@@ -346,7 +346,7 @@ class PreTrainedTokenizerFast(PreTrainedTokenizerBase):
The stride to use when handling overflow.
pad_to_multiple_of (`int`, *optional*):
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
- the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
+ the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta).
"""
_truncation = self._tokenizer.truncation
_padding = self._tokenizer.padding
...
...