chenpangpang / transformers · Commits

Unverified commit 08b41b41, authored Jan 20, 2022 by Kamal Raj, committed by GitHub on Jan 20, 2022

Update pipelines.mdx (#15243)

fix few spelling mistakes

parent 85ea462c
Showing 1 changed file with 4 additions and 4 deletions.

docs/source/main_classes/pipelines.mdx (+4, -4)
@@ -254,7 +254,7 @@ For users, a rule of thumb is:

 ## Pipeline chunk batching

 `zero-shot-classification` and `question-answering` are slightly specific in the sense, that a single input might yield
-mutliple forward pass of a model. Under normal circumstances, this would yield issues with `batch_size` argument.
+multiple forward pass of a model. Under normal circumstances, this would yield issues with `batch_size` argument.

 In order to circumvent this issue, both of these pipelines are a bit specific, they are `ChunkPipeline` instead of
 regular `Pipeline`. In short:

@@ -263,7 +263,7 @@ regular `Pipeline`. In short:

 ```python
 preprocessed = pipe.preprocess(inputs)
 model_outputs = pipe.forward(preprocessed)
-outputs = pipe.postprocess(model_ouputs)
+outputs = pipe.postprocess(model_outputs)
 ```

 Now becomes:

@@ -274,7 +274,7 @@ all_model_outputs = []
 for preprocessed in pipe.preprocess(inputs):
     model_outputs = pipe.forward(preprocessed)
     all_model_outputs.append(model_outputs)
-outputs = pipe.postprocess(all_model_ouputs)
+outputs = pipe.postprocess(all_model_outputs)
 ```

 This should be very transparent to your code because the pipelines are used in

@@ -282,7 +282,7 @@ the same way.

 This is a simplified view, since the pipeline can handle automatically the batch to ! Meaning you don't have to care
 about how many forward passes you inputs are actually going to trigger, you can optimize the `batch_size`
-independantly of the inputs. The caveats from the previous section still apply.
+independently of the inputs. The caveats from the previous section still apply.

 ## Pipeline custom code
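As an aside on the last hunk: a minimal, runnable sketch of what tuning `batch_size` independently of the inputs looks like with one of these `ChunkPipeline`s. This is an illustration added here, not part of the commit; the default checkpoint, the question, and the sizes are all assumptions, while `batch_size` is the standard `Pipeline` call argument.

```python
from transformers import pipeline

# question-answering is a ChunkPipeline: one long (question, context) pair
# may be split into several chunks, each needing its own forward pass.
# No model is pinned here, so transformers falls back to its default
# question-answering checkpoint.
qa = pipeline("question-answering")

# A context long enough that it gets split into several chunks, i.e. this
# single input triggers several forward passes.
long_context = "Pipelines chunk long inputs transparently. " * 300

# batch_size only controls how many chunks are grouped per forward pass;
# how many passes the input triggers is handled internally.
answer = qa(
    question="What do pipelines do with long inputs?",
    context=long_context,
    batch_size=8,
)
print(answer["answer"], answer["score"])
```

The point of the fixed sentence is visible here: the caller never counts forward passes, so `batch_size` can be raised or lowered purely for throughput, subject to the caveats the document mentions.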
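And, to show the shape of the chunked loop from the diff without loading a model, a pure-Python toy following the same pattern; every name here (`toy_preprocess`, `toy_forward`, `toy_postprocess`) is a hypothetical stand-in, not transformers API.

```python
def toy_preprocess(text, chunk_size=16):
    # Stand-in for ChunkPipeline.preprocess: a single input yields a
    # variable number of model-ready pieces, so the number of forward
    # passes is not known up front.
    for start in range(0, len(text), chunk_size):
        yield text[start:start + chunk_size]

def toy_forward(chunk):
    # Stand-in for one model forward pass over one chunk.
    return len(chunk)

def toy_postprocess(all_outputs):
    # Stand-in for postprocess: merge per-chunk results into one answer.
    return sum(all_outputs)

all_model_outputs = []
for preprocessed in toy_preprocess("a single input that happens to be fairly long"):
    all_model_outputs.append(toy_forward(preprocessed))
print(toy_postprocess(all_model_outputs))  # one result for one input
```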