chenpangpang / transformers · Commits

Unverified commit 3df3b9d4, authored Jul 05, 2023 by Rafael Padilla, committed by GitHub on Jul 05, 2023

Fix model referenced and results in documentation. Model mentioned was inaccessible (#24609)
Parent: 050ef145

Showing 2 changed files, with 34 additions and 34 deletions:
- docs/source/en/tasks/object_detection.md (+17, -17)
- docs/source/ko/tasks/object_detection.md (+17, -17)
docs/source/en/tasks/object_detection.md (view file @ 3df3b9d4)

@@ -481,7 +481,7 @@ Next, prepare an instance of a `CocoDetection` class that can be used with `coco
 ...     return {"pixel_values": pixel_values, "labels": target}
->>> im_processor = AutoImageProcessor.from_pretrained("MariaK/detr-resnet-50_finetuned_cppe5")
+>>> im_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
 >>> path_output_cppe5, path_anno = save_cppe5_annotation_file_images(cppe5["test"])
 >>> test_ds_coco_format = CocoDetection(path_output_cppe5, im_processor, path_anno)
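For context, the `CocoDetection` wrapper and `im_processor` used in this hunk are defined earlier in the same guide. A minimal sketch of that wrapper, assuming `torchvision` is installed and following the guide's pattern (the exact class in the doc may differ slightly):

```py
import torchvision


class CocoDetection(torchvision.datasets.CocoDetection):
    """Wrap a COCO-format image folder so each item is ready for a DETR-style model."""

    def __init__(self, img_folder, image_processor, ann_file):
        super().__init__(img_folder, annFile=ann_file)
        self.image_processor = image_processor

    def __getitem__(self, idx):
        # Read the raw image and its COCO annotations.
        img, target = super().__getitem__(idx)
        image_id = self.ids[idx]
        target = {"image_id": image_id, "annotations": target}
        # Let the image processor resize/normalize the image and convert the annotations.
        encoding = self.image_processor(images=img, annotations=target, return_tensors="pt")
        pixel_values = encoding["pixel_values"].squeeze()  # drop the batch dimension
        target = encoding["labels"][0]
        return {"pixel_values": pixel_values, "labels": target}
```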
@@ -493,7 +493,7 @@ Finally, load the metrics and run the evaluation.
 >>> import evaluate
 >>> from tqdm import tqdm
->>> model = AutoModelForObjectDetection.from_pretrained("MariaK/detr-resnet-50_finetuned_cppe5")
+>>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
 >>> module = evaluate.load("ybelkada/cocoevaluate", coco=test_ds_coco_format.coco)
 >>> val_dataloader = torch.utils.data.DataLoader(
 ...     test_ds_coco_format, batch_size=8, shuffle=False, num_workers=4, collate_fn=collate_fn
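The hunk stops at the `DataLoader` construction; in the guide the evaluation then iterates over `val_dataloader`, runs the model, and accumulates predictions in the `cocoevaluate` module. A hedged sketch of that loop, assuming batches produced by the guide's `collate_fn` carry `pixel_values`, `pixel_mask`, and `labels` (note `post_process` was the call used at the time; newer versions favor `post_process_object_detection`):

```py
>>> with torch.no_grad():
...     for batch in tqdm(val_dataloader):
...         pixel_values = batch["pixel_values"]
...         pixel_mask = batch["pixel_mask"]
...         labels = [{k: v for k, v in t.items()} for t in batch["labels"]]  # COCO-style ground truth
...
...         outputs = model(pixel_values=pixel_values, pixel_mask=pixel_mask)  # forward pass, no gradients
...
...         # Rescale predictions back to the original image sizes before scoring.
...         orig_target_sizes = torch.stack([target["orig_size"] for target in labels], dim=0)
...         results = im_processor.post_process(outputs, orig_target_sizes)
...
...         module.add(prediction=results, reference=labels)

>>> results = module.compute()
>>> print(results)
```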
@@ -522,18 +522,18 @@ Finally, load the metrics and run the evaluation.
 Accumulating evaluation results...
 DONE (t=0.08s).
 IoU metric: bbox
- Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.150
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.352
- Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.280
+ Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.681
- Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.130
+ Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.292
- Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.038
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.168
- Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.036
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.208
- Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.182
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.429
- Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.166
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.274
- Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.317
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.484
- Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.335
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.501
- Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.104
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.191
- Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.146
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.323
- Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.382
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.590
 ```
 These results can be further improved by adjusting the hyperparameters in [`~transformers.TrainingArguments`]. Give it a go!
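The context line above points to `TrainingArguments` tuning as the lever for better AP/AR. A hedged sketch of the kind of knobs one might adjust; the values below are illustrative, not the guide's exact configuration:

```py
from transformers import TrainingArguments

# Illustrative hyperparameters only; adjust per GPU memory and dataset size.
training_args = TrainingArguments(
    output_dir="detr-resnet-50_finetuned_cppe5",
    per_device_train_batch_size=8,   # larger batches if memory allows
    num_train_epochs=30,             # training longer usually lifts AP on CPPE-5
    learning_rate=1e-5,              # a low LR tends to stabilize DETR fine-tuning
    weight_decay=1e-4,
    fp16=True,                       # mixed precision for speed
    save_total_limit=2,
    remove_unused_columns=False,     # keep pixel_values/labels columns for the collator
    push_to_hub=False,
)
```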
@@ -549,15 +549,15 @@ for object detection with your model, and pass an image to it:
 >>> url = "https://i.imgur.com/2lnWoly.jpg"
 >>> image = Image.open(requests.get(url, stream=True).raw)
->>> obj_detector = pipeline("object-detection", model="MariaK/detr-resnet-50_finetuned_cppe5")
+>>> obj_detector = pipeline("object-detection", model="devonho/detr-resnet-50_finetuned_cppe5")
 >>> obj_detector(image)
 ```
 You can also manually replicate the results of the pipeline if you'd like:
 ```py
->>> image_processor = AutoImageProcessor.from_pretrained("MariaK/detr-resnet-50_finetuned_cppe5")
+>>> image_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
->>> model = AutoModelForObjectDetection.from_pretrained("MariaK/detr-resnet-50_finetuned_cppe5")
+>>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
 >>> with torch.no_grad():
 ...     inputs = image_processor(images=image, return_tensors="pt")
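The hunk is cut off right after `inputs = image_processor(...)`; in the guide the manual replication continues by running the model and post-processing the outputs into scored boxes. A sketch of that continuation, assuming a 0.5 confidence threshold as in the guide:

```py
>>> with torch.no_grad():
...     inputs = image_processor(images=image, return_tensors="pt")
...     outputs = model(**inputs)
...     # Rescale predicted boxes to the original image size (height, width).
...     target_sizes = torch.tensor([image.size[::-1]])
...     results = image_processor.post_process_object_detection(
...         outputs, threshold=0.5, target_sizes=target_sizes
...     )[0]

>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
...     box = [round(i, 2) for i in box.tolist()]
...     print(f"Detected {model.config.id2label[label.item()]} with confidence {round(score.item(), 3)} at {box}")
```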
docs/source/ko/tasks/object_detection.md (view file @ 3df3b9d4; Korean context translated)

@@ -473,7 +473,7 @@ The API that builds a COCO dataset requires the data to be in a specific format
 ...     return {"pixel_values": pixel_values, "labels": target}
->>> im_processor = AutoImageProcessor.from_pretrained("MariaK/detr-resnet-50_finetuned_cppe5")
+>>> im_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
 >>> path_output_cppe5, path_anno = save_cppe5_annotation_file_images(cppe5["test"])
 >>> test_ds_coco_format = CocoDetection(path_output_cppe5, im_processor, path_anno)
@@ -485,7 +485,7 @@ The API that builds a COCO dataset requires the data to be in a specific format
 >>> import evaluate
 >>> from tqdm import tqdm
->>> model = AutoModelForObjectDetection.from_pretrained("MariaK/detr-resnet-50_finetuned_cppe5")
+>>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
 >>> module = evaluate.load("ybelkada/cocoevaluate", coco=test_ds_coco_format.coco)
 >>> val_dataloader = torch.utils.data.DataLoader(
 ...     test_ds_coco_format, batch_size=8, shuffle=False, num_workers=4, collate_fn=collate_fn
@@ -514,18 +514,18 @@ The API that builds a COCO dataset requires the data to be in a specific format
 Accumulating evaluation results...
 DONE (t=0.08s).
 IoU metric: bbox
- Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.150
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.352
- Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.280
+ Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.681
- Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.130
+ Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.292
- Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.038
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.168
- Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.036
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.208
- Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.182
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.429
- Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.166
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.274
- Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.317
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.484
- Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.335
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.501
- Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.104
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.191
- Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.146
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.323
- Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.382
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.590
 ```
 These results can be further improved by adjusting the hyperparameters in [`~transformers.TrainingArguments`]. Give it a go!
@@ -544,15 +544,15 @@ Fine-tune and evaluate the DETR model, upload it to the Hugging Face Hub, ...
 >>> url = "https://i.imgur.com/2lnWoly.jpg"
 >>> image = Image.open(requests.get(url, stream=True).raw)
->>> obj_detector = pipeline("object-detection", model="MariaK/detr-resnet-50_finetuned_cppe5")
+>>> obj_detector = pipeline("object-detection", model="devonho/detr-resnet-50_finetuned_cppe5")
 >>> obj_detector(image)
 ```
 If you'd like, you can manually replicate the results of the `pipeline`:
 ```py
->>> image_processor = AutoImageProcessor.from_pretrained("MariaK/detr-resnet-50_finetuned_cppe5")
+>>> image_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
->>> model = AutoModelForObjectDetection.from_pretrained("MariaK/detr-resnet-50_finetuned_cppe5")
+>>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
 >>> with torch.no_grad():
 ...     inputs = image_processor(images=image, return_tensors="pt")