Unverified commit fe3df9d5 authored by Klaus Hipp, committed by GitHub

[Docs] Add language identifiers to fenced code blocks (#28955)

Add language identifiers to code blocks
parent c617f988
@@ -64,7 +64,7 @@ ...exactly what temperature a GPU should aim for under heavy load...
 When using multiple GPUs, how the cards are interconnected can have a major impact on total training time. If the GPUs are on the same physical node, you can run:
-```
+```bash
 nvidia-smi topo -m
 ```
...
@@ -42,7 +42,7 @@ model = AutoModelForImageClassification.from_pretrained(MODEL_ID).to("cuda")
 ### Image Classification with ViT
-```
+```python
 from PIL import Image
 import requests
 import numpy as np
...
@@ -36,7 +36,7 @@ IPEX releases follow PyTorch. To install it via pip:
 | 1.11 | 1.11.200+cpu |
 | 1.10 | 1.10.100+cpu |
-```
+```bash
 pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
 ```
...
@@ -38,7 +38,7 @@ Wheel files are available for the following Python versions:
 | 1.11.0 | | √ | √ | √ | √ |
 | 1.10.0 | √ | √ | √ | √ | |
-```
+```bash
 pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
 ```
@@ -70,13 +70,13 @@ oneccl_bindings_for_pytorch is installed together with the MPI toolset. The environment needs to be sourced before using it.
 for Intel® oneCCL >= 1.12.0
-```
+```bash
 oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
 source $oneccl_bindings_for_pytorch_path/env/setvars.sh
 ```
 for Intel® oneCCL versions < 1.12.0
-```
+```bash
 torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))")
 source $torch_ccl_path/env/setvars.sh
 ```
...
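The `python -c` one-liners in the hunk above just locate an installed package's directory so that shell scripts shipped inside it (like `env/setvars.sh`) can be sourced. A standalone sketch of the same pattern, using the stdlib `json` package as a stand-in since the oneCCL bindings may not be installed here:

```python
import os
import json  # stand-in for oneccl_bindings_for_pytorch / torch_ccl

# Locate the directory a package was installed into; setup scripts
# bundled with a package live under this path.
pkg_dir = os.path.dirname(os.path.abspath(json.__file__))
print(pkg_dir)
```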
@@ -131,7 +131,7 @@ There are other differences between DP and DDP, but they are not relevant to this discussion...
 `NCCL_P2P_DISABLE=1` was used to disable the NVLink feature in the corresponding benchmark.
-```
+```bash
 # DP
 rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
...
@@ -151,7 +151,7 @@ training_args = TrainingArguments(bf16=True, **default_args)
 Ampere hardware uses a special data type called tf32. It has the same numeric range as fp32 (8 bits of exponent), but 10 bits of precision instead of 23 (the same as fp16), using only 19 bits in total. It is "magical" in the sense that you can use your normal fp32 training and/or inference code and, by enabling tf32 support, get up to a 3x throughput improvement. All you need to do is add the following code:
-```
+```python
 import torch
 torch.backends.cuda.matmul.allow_tf32 = True
 torch.backends.cudnn.allow_tf32 = True
...
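To make the 23-bit vs. 10-bit mantissa trade-off concrete, here is a standalone sketch (not part of the patch, no GPU needed) that truncates a float32 mantissa to 10 bits, mimicking what tf32 keeps:

```python
import struct

def truncate_mantissa(x: float, keep_bits: int = 10) -> float:
    """Zero out the low mantissa bits of a float32 value, keeping only
    `keep_bits` of the 23 fp32 mantissa bits (tf32 keeps 10)."""
    # Reinterpret the float32 bit pattern as an unsigned 32-bit integer.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Discard the low-order mantissa bits that tf32 does not store.
    drop = 23 - keep_bits
    bits &= ~((1 << drop) - 1)
    return struct.unpack("<f", struct.pack("<I", bits))[0]

# 1/3 is not exactly representable; tf32 keeps fewer digits than fp32.
print(truncate_mantissa(1 / 3))  # → 0.333251953125 (fp32 keeps ≈0.33333334)
```

Exactly representable values such as 0.5 pass through unchanged, which is why enabling tf32 is usually harmless for training while speeding up matmuls.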
@@ -490,7 +490,7 @@ def compute_metrics(eval_pred):
 Next, pass your input to the model and return the `logits`:
-```
+```py
 >>> logits = run_inference(trained_model, sample_test_video["video"])
 ```
...
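For illustration only (not part of the patch): the predicted class is the index of the largest logit. A plain Python list stands in for the tensor a real model would return, where `logits.argmax(-1).item()` plays the same role:

```python
# Hypothetical logits for a 4-class video classifier.
logits = [0.7, 2.9, -1.2, 0.4]

# The predicted class id is the index of the largest logit.
predicted_class = max(range(len(logits)), key=lambda i: logits[i])
print(predicted_class)  # → 1
```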
@@ -373,7 +373,7 @@ Assistant:
 Therefore, it is important to use this format in the examples of your custom `chat` prompt template as well.
 You can override the `chat` template at instantiation, as follows.
-```
+```python
 template = """ [...] """
 agent = HfAgent(url_endpoint=your_endpoint, chat_prompt_template=template)
...
@@ -64,7 +64,7 @@ ...it is hard to know the exact right temperature to aim for when a GPU gets hot, but probably...
 When using multiple GPUs, how the cards are interconnected can have a large impact on total training time. If the GPUs are on the same physical node, you can check it as follows:
-```
+```bash
 nvidia-smi topo -m
 ```
...
@@ -36,7 +36,7 @@ IPEX releases follow PyTorch. To install it via pip:
 | 1.11 | 1.11.200+cpu |
 | 1.10 | 1.10.100+cpu |
-```
+```bash
 pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
 ```
...
@@ -37,7 +37,7 @@ rendered properly in your Markdown viewer.
 | 1.11.0 | | √ | √ | √ | √ |
 | 1.10.0 | √ | √ | √ | √ | |
-```
+```bash
 pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
 ```
 `{pytorch_version}` denotes the PyTorch version, e.g. 1.13.0.
@@ -57,13 +57,13 @@ PyTorch 1.12.1 should be used with oneccl_bindings_for_pytorch version 1.12.10...
 oneccl_bindings_for_pytorch is installed together with the MPI toolset. The environment needs to be sourced before using it.
 For Intel® oneCCL version 1.12.0 and above
-```
+```bash
 oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
 source $oneccl_bindings_for_pytorch_path/env/setvars.sh
 ```
 For Intel® oneCCL versions below 1.12.0
-```
+```bash
 torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))")
 source $torch_ccl_path/env/setvars.sh
 ```
...
@@ -133,7 +133,7 @@ There are other differences between DP and DDP, but they are not relevant to this discussion...
 `NCCL_P2P_DISABLE=1` was used to disable the NVLink feature in the corresponding benchmark.
-```
+```bash
 # DP
 rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
...
@@ -485,7 +485,7 @@ def compute_metrics(eval_pred):
 Pass your input to the model and get the `logits` back:
-```
+```py
 >>> logits = run_inference(trained_model, sample_test_video["video"])
 ```
...
@@ -72,7 +72,7 @@ pip install 'transformers[tf-cpu]'
 M1 / ARM users
 Before installing TensorFlow 2.0, you need to install the following libraries:
-```
+```bash
 brew install cmake
 brew install pkg-config
 ```
...
@@ -2048,7 +2048,7 @@ print(f"rank{rank}:\n in={text_in}\n out={text_out}")
 ```
 Let's save it as `t0.py` and run it:
-```
+```bash
 $ deepspeed --num_gpus 2 t0.py
 rank0:
 in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy
@@ -2074,13 +2074,13 @@ rank1:
 To run the DeepSpeed tests, run at least the following:
-```
+```bash
 RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py
 ```
 If you changed any model or PyTorch example code, also run the multi-model tests. The following will run all the DeepSpeed tests:
-```
+```bash
 RUN_SLOW=1 pytest tests/deepspeed
 ```
...
@@ -64,7 +64,7 @@ rendered properly in your Markdown viewer.
 If you use multiple GPUs, the way the cards are interconnected can have a huge impact on total training time. If the GPUs are on the same physical node, you can run:
-```
+```bash
 nvidia-smi topo -m
 ```
...
@@ -228,7 +228,7 @@ Contributions that implement this command for other distributed hardware setups
 When using `run_eval.py`, the following features can be useful:
 * if you run the script multiple times and want to make it easier to track which arguments produced the output, use `--dump-args`. Along with the results, it will also dump any custom params that were passed to the script. For example, if you used `--num_beams 8 --early_stopping true`, the output will be:
-```
+```json
 {'bleu': 26.887, 'n_obs': 10, 'runtime': 1, 'seconds_per_sample': 0.1, 'num_beams': 8, 'early_stopping': True}
 ```
@@ -236,13 +236,13 @@ When using `run_eval.py`, the following features can be useful:
 If using `--dump-args --info`, the output will be:
-```
+```json
 {'bleu': 26.887, 'n_obs': 10, 'runtime': 1, 'seconds_per_sample': 0.1, 'num_beams': 8, 'early_stopping': True, 'info': '2020-09-13 18:44:43'}
 ```
 If using `--dump-args --info "pair:en-ru chkpt=best"`, the output will be:
-```
+```json
 {'bleu': 26.887, 'n_obs': 10, 'runtime': 1, 'seconds_per_sample': 0.1, 'num_beams': 8, 'early_stopping': True, 'info': 'pair=en-ru chkpt=best'}
 ```
...
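A side note on these dumps: as printed they are Python dict literals (single-quoted keys, bare `True`) rather than strict JSON, so `json.loads` would reject them. A standalone sketch (not part of the patch) of reading one back with the stdlib `ast.literal_eval`:

```python
import ast

# One of the result lines shown in the examples above.
line = ("{'bleu': 26.887, 'n_obs': 10, 'runtime': 1, "
        "'seconds_per_sample': 0.1, 'num_beams': 8, 'early_stopping': True}")

# literal_eval safely evaluates Python literals (dicts, numbers, booleans)
# without executing arbitrary code.
results = ast.literal_eval(line)
print(results["bleu"], results["early_stopping"])  # → 26.887 True
```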
@@ -53,7 +53,7 @@ Coming soon!
 Most examples are equipped with a mechanism to truncate the number of dataset samples to the desired length. This is useful for debugging purposes, for example to quickly check that all stages of the program can complete, before running the same setup on the full dataset, which may take hours to complete.
 For example, here is how to truncate all three splits to just 50 samples each:
-```
+```bash
 examples/pytorch/token-classification/run_ner.py \
 --max_train_samples 50 \
 --max_eval_samples 50 \
@@ -62,7 +62,7 @@ examples/pytorch/token-classification/run_ner.py \
 ```
 Most example scripts should have the first two command line arguments, and some have the third one. You can quickly check whether a given example supports any of these by passing the `-h` option, e.g.:
-```
+```bash
 examples/pytorch/token-classification/run_ner.py -h
 ```
...
@@ -277,7 +277,7 @@ language or concept the adapter layers shall be trained. The adapter weights will
 accordingly be called `adapter.<target_language>.safetensors`.
 Let's run an example script. Make sure to be logged in so that your model can be directly uploaded to the Hub.
-```
+```bash
 huggingface-cli login
 ```
...
@@ -20,7 +20,7 @@ This folder contains various research projects using 🤗 Transformers. They are
 version of 🤗 Transformers that is indicated in the requirements file of each folder. Updating them to the most recent version of the library will require some work.
 To use any of them, just run the command
-```
+```bash
 pip install -r requirements.txt
 ```
 inside the folder of your choice.
...