"git@developer.sourcefind.cn:OpenDAS/apex.git" did not exist on "2e7d799f9cc5b8544c9ef07330c4cd7eacd894ee"
Unverified Commit 08b3eac2, authored by Nicholas Broad, committed by GitHub

single char ` addition for docs (#1989)

# What does this PR do?

I think this will fix the weird formatting in the docs. None of the sections after MAX_TOP_N_TOKENS show up in the navigation bar on the right
(https://huggingface.co/docs/text-generation-inference/basic_tutorials/launcher#maxtopntokens).


## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?


## Who can review?

@merveenoyan

---------
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
parent 5ab4cef6
````diff
@@ -125,7 +125,7 @@ Options:
 ## MAX_TOP_N_TOKENS
 ```shell
   --max-top-n-tokens <MAX_TOP_N_TOKENS>
-          This is the maximum allowed value for clients to set `top_n_tokens`. `top_n_tokens is used to return information about the the `n` most likely tokens at each generation step, instead of just the sampled token. This information can be used for downstream tasks like for classification or ranking
+          This is the maximum allowed value for clients to set `top_n_tokens`. `top_n_tokens` is used to return information about the the `n` most likely tokens at each generation step, instead of just the sampled token. This information can be used for downstream tasks like for classification or ranking

           [env: MAX_TOP_N_TOKENS=]
           [default: 5]
@@ -236,7 +236,7 @@ struct Args {
     max_stop_sequences: usize,
     /// This is the maximum allowed value for clients to set `top_n_tokens`.
-    /// `top_n_tokens is used to return information about the the `n` most likely
+    /// `top_n_tokens` is used to return information about the the `n` most likely
     /// tokens at each generation step, instead of just the sampled token. This
     /// information can be used for downstream tasks like for classification or
     /// ranking.
````
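For context on the option touched by this change, here is a minimal usage sketch. It is not part of the PR; the model ID placeholder, host, port, prompt, and requested values below are illustrative assumptions.

```shell
# Start the server with an explicit cap on how many per-step top tokens
# clients may request (5 is also the default).
text-generation-launcher --model-id <MODEL_ID> --max-top-n-tokens 5

# Ask for the 3 most likely tokens at each generation step in addition to
# the sampled token; adjust the host/port to your deployment.
curl 127.0.0.1:3000/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 10, "top_n_tokens": 3}}'
```

A request whose `top_n_tokens` exceeds the configured cap should be rejected by the server's request validation rather than silently truncated.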