chenpangpang / transformers · Commits

Commit eec5a3a8 (Unverified)
Authored Oct 18, 2023 by Rockerz, committed by GitHub on Oct 18, 2023
Parent: d933818d

Refactor code part in documentation translated to japanese (#26900)

Refactor code in documentation

Showing 1 changed file with 18 additions and 18 deletions:

docs/source/ja/preprocessing.md (+18, -18)
First hunk, after the line 「次に、テキストをトークナイザに渡します:」 ("Next, pass the text to the tokenizer:"): the code fence's language tag changes from `python` to `py`, and the Japanese sample sentence is replaced with the original English one, which is what the printed token ids actually correspond to:

````diff
@@ -64,8 +64,8 @@ pip install datasets
 次に、テキストをトークナイザに渡します:

-```python
->>> encoded_input = tokenizer("魔法使いの事には干渉しないでください、彼らは微妙で怒りっぽいです。")
+```py
+>>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.")
 >>> print(encoded_input)
 {'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102],
  'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 ...
````
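The ids printed in the hunk above are vocabulary indices, with 101 and 102 being the special `[CLS]` and `[SEP]` marker ids that the tokenizer wraps around the sentence. As a rough, stdlib-only illustration of that lookup idea (the toy vocabulary, word-level splitting, and ids below are invented for the sketch; real tokenizers use learned subword vocabularies):

```python
# Toy illustration of a tokenizer's encode step: split the text into
# known units, map each unit to an id, and add [CLS]/[SEP] marker ids.
# The vocabulary here is hypothetical, not the real BERT one.
TOY_VOCAB = {"[CLS]": 101, "[SEP]": 102, "what": 5, "about": 6,
             "second": 7, "breakfast": 8, "?": 9}

def toy_encode(text):
    words = text.lower().replace("?", " ?").split()
    ids = [TOY_VOCAB[w] for w in words]            # KeyError for unknown words
    input_ids = [TOY_VOCAB["[CLS]"]] + ids + [TOY_VOCAB["[SEP]"]]
    return {
        "input_ids": input_ids,
        "token_type_ids": [0] * len(input_ids),    # single sentence -> all zeros
        "attention_mask": [1] * len(input_ids),    # every position is a real token
    }

enc = toy_encode("What about second breakfast?")
print(enc["input_ids"])  # [101, 5, 6, 7, 8, 9, 102]
```

This also shows why `token_type_ids` is all zeros in the diff: the example encodes a single sentence, so every position belongs to segment 0.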
@@ -90,11 +90,11 @@ pip install datasets
...
@@ -90,11 +90,11 @@ pip install datasets
複数の文章を前処理する場合、トークナイザにリストとして渡してください:
複数の文章を前処理する場合、トークナイザにリストとして渡してください:
```
py
thon
```
py
>>>
batch_sentences
=
[
>>>
batch_sentences
=
[
...
"
でも、セカンドブレックファーストはどうなるの?
"
,
...
"
But what about second breakfast?
"
,
...
"
ピップ、セカンドブレックファーストのことを知っているかどうかはわからないと思うよ。
"
,
...
"
Don't think he knows about second breakfast, Pip.
"
,
...
"
イレブンジーズはどうなの?
"
,
...
"
What about elevensies?
"
,
...
]
...
]
>>>
encoded_inputs
=
tokenizer
(
batch_sentences
)
>>>
encoded_inputs
=
tokenizer
(
batch_sentences
)
>>>
print
(
encoded_inputs
)
>>>
print
(
encoded_inputs
)
Third hunk, after 「バッチ内の短いシーケンスを最長のシーケンスに合わせるために、`padding`パラメータを`True`に設定します:」 ("To match the shorter sequences in the batch to the longest one, set the `padding` parameter to `True`:"):

````diff
@@ -116,11 +116,11 @@ pip install datasets
 バッチ内の短いシーケンスを最長のシーケンスに合わせるために、`padding`パラメータを`True`に設定します:

-```python
+```py
 >>> batch_sentences = [
-...     "でもセカンドブレックファーストはどうなるの?",
+...     "But what about second breakfast?",
-...     "セカンドブレックファーストについては知らないと思う、ピップ。",
+...     "Don't think he knows about second breakfast, Pip.",
-...     "イレブンジーズはどうなの?",
+...     "What about elevensies?",
 ... ]
 >>> encoded_input = tokenizer(batch_sentences, padding=True)
 >>> print(encoded_input)
````
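The `padding=True` behavior shown in the hunk above can be imitated in plain Python: find the longest sequence in the batch, right-pad the shorter ones with a pad id (0 here), and extend each attention mask with zeros so the padded positions can be ignored downstream. A minimal sketch of the idea, not the transformers implementation:

```python
def pad_batch(batch_ids, pad_id=0):
    """Right-pad every id sequence to the length of the longest one."""
    max_len = max(len(ids) for ids in batch_ids)
    input_ids, attention_mask = [], []
    for ids in batch_ids:
        n_pad = max_len - len(ids)
        input_ids.append(ids + [pad_id] * n_pad)
        attention_mask.append([1] * len(ids) + [0] * n_pad)  # 0 marks padding
    return {"input_ids": input_ids, "attention_mask": attention_mask}

batch = [[101, 5, 102], [101, 5, 6, 7, 102]]
out = pad_batch(batch)
print(out["input_ids"])       # [[101, 5, 102, 0, 0], [101, 5, 6, 7, 102]]
print(out["attention_mask"])  # [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
```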
Fourth hunk, after 「モデルが受け入れる最大の長さにシーケンスを切り詰めるには、`truncation`パラメータを`True`に設定します:」 ("To truncate sequences to the maximum length the model accepts, set the `truncation` parameter to `True`:"):

````diff
@@ -143,11 +143,11 @@ pip install datasets
 モデルが受け入れる最大の長さにシーケンスを切り詰めるには、`truncation`パラメータを`True`に設定します:

-```python
+```py
 >>> batch_sentences = [
-...     "でも、セカンドブレックファーストはどうなるの?",
+...     "But what about second breakfast?",
-...     "セカンドブレックファーストについては知らないと思う、ピップ。",
+...     "Don't think he knows about second breakfast, Pip.",
-...     "イレブンジーズはどうなの?",
+...     "What about elevensies?",
 ... ]
 >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True)
 >>> print(encoded_input)
````
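Truncation is the mirror image of padding: sequences longer than the model's limit are clipped instead of extended. One common convention is to keep the final separator id after clipping; the sketch below assumes that convention and a hypothetical `sep_id`, and is only a toy illustration of the idea behind `truncation=True`:

```python
def truncate_batch(batch_ids, max_length, sep_id=102):
    """Clip each sequence to max_length ids, keeping a trailing separator."""
    out = []
    for ids in batch_ids:
        if len(ids) > max_length:
            ids = ids[:max_length - 1] + [sep_id]  # re-append separator after clipping
        out.append(ids)
    return out

print(truncate_batch([[101, 5, 6, 7, 8, 9, 102]], max_length=5))
# [[101, 5, 6, 7, 102]]
```

Sequences already at or under `max_length` pass through unchanged.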
Fifth hunk, inside the `<frameworkcontent>` / `<pt>` (PyTorch) section:

````diff
@@ -177,11 +177,11 @@ pip install datasets
 <frameworkcontent>
 <pt>

-```python
+```py
 >>> batch_sentences = [
-...     "でも、セカンドブレックファーストはどうなるの?",
+...     "But what about second breakfast?",
-...     "ピップ、セカンドブレックファーストについては知っていないと思うよ。",
+...     "Don't think he knows about second breakfast, Pip.",
-...     "イレブンジーズはどうなの?",
+...     "What about elevensies?",
 ... ]
 >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
 >>> print(encoded_input)
 ...
````
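`return_tensors="pt"` asks for the batch back as PyTorch tensors, which only works once every row has the same length; that is why it is combined with `padding=True` in the hunk above. A stdlib-only check of that rectangularity invariant (a sketch, not anything from the transformers codebase):

```python
def is_rectangular(batch):
    """A batch of id lists can become one tensor only if all rows share a length."""
    lengths = {len(row) for row in batch}
    return len(lengths) <= 1

print(is_rectangular([[101, 5, 102, 0], [101, 6, 7, 102]]))  # True
print(is_rectangular([[101, 5, 102], [101, 6, 7, 102]]))     # False
```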