{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "eXkR4NjYhezg" }, "source": [ "# Run Qwen PPO with [verl](https://github.com/volcengine/verl)\n", "\n", "This tutorial provides a step-by-step guide to using verl to run your RLHF pipeline. You can find our [GitHub repo](https://github.com/volcengine/verl/) and [documentation](https://verl.readthedocs.io/en/latest/index.html) for more details.\n", "\n", "This notebook is also published on the [Lightning Studio](https://lightning.ai/hlin-verl/studios/verl-getting-started) platform, which provides free GPU quota every month. Check out the published notebook with pre-installed dependencies using a free L4 GPU [here](https://lightning.ai/hlin-verl/studios/verl-getting-started) (no credit card required).\n", "\n", "### You will learn:\n", "\n", "- How to install verl from scratch.\n", "- How to use existing scripts to run an RLHF pipeline with your own models and data." ] }, { "cell_type": "markdown", "metadata": { "id": "XSDNzNuQkJJh" }, "source": [ "# Dependency Installation\n", "\n", "If you are running on Lightning Studio using the published notebook, the dependencies are **already installed** and you can proceed to the step \"**Load Pretrained Language Model**\"." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "gnfZyMm-3BNC", "outputId": "9e8e5116-5344-4c9b-bc34-e9e9be752ff5" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: pip in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (24.3.1)\n", "Requirement already satisfied: setuptools in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (75.8.0)\n", "Requirement already satisfied: wheel in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (0.45.1)\n", "Requirement already satisfied: torch==2.4.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages 
(2.4.0)\n", "Requirement already satisfied: torchvision==0.19.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (0.19.0)\n", "Requirement already satisfied: filelock in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (3.16.1)\n", "Requirement already satisfied: typing-extensions>=4.8.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (4.12.2)\n", "Requirement already satisfied: sympy in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (1.13.3)\n", "Requirement already satisfied: networkx in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (3.4.2)\n", "Requirement already satisfied: jinja2 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (3.1.4)\n", "Requirement already satisfied: fsspec in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (2024.9.0)\n", "Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (12.1.105)\n", "Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (12.1.105)\n", "Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (12.1.105)\n", "Requirement already satisfied: nvidia-cudnn-cu12==9.1.0.70 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (9.1.0.70)\n", "Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (12.1.3.1)\n", "Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in 
/system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (11.0.2.54)\n", "Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (10.3.2.106)\n", "Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (11.4.5.107)\n", "Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (12.1.0.106)\n", "Requirement already satisfied: nvidia-nccl-cu12==2.20.5 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (2.20.5)\n", "Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (12.1.105)\n", "Requirement already satisfied: triton==3.0.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0) (3.0.0)\n", "Requirement already satisfied: numpy in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torchvision==0.19.0) (1.26.4)\n", "Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torchvision==0.19.0) (10.4.0)\n", "Requirement already satisfied: nvidia-nvjitlink-cu12 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from nvidia-cusolver-cu12==11.4.5.107->torch==2.4.0) (12.6.77)\n", "Requirement already satisfied: MarkupSafe>=2.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from jinja2->torch==2.4.0) (3.0.2)\n", "Requirement already satisfied: mpmath<1.4,>=1.1.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from sympy->torch==2.4.0) (1.3.0)\n", "pytorch-lightning 2.4.0\n", "torch 2.4.0\n", 
"torchmetrics 1.3.1\n", "torchvision 0.19.0\n", "Requirement already satisfied: flash-attn in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (2.7.2.post1)\n", "Requirement already satisfied: torch in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from flash-attn) (2.4.0)\n", "Requirement already satisfied: einops in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages/einops-0.8.0-py3.10.egg (from flash-attn) (0.8.0)\n", "Requirement already satisfied: filelock in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (3.16.1)\n", "Requirement already satisfied: typing-extensions>=4.8.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (4.12.2)\n", "Requirement already satisfied: sympy in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (1.13.3)\n", "Requirement already satisfied: networkx in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (3.4.2)\n", "Requirement already satisfied: jinja2 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (3.1.4)\n", "Requirement already satisfied: fsspec in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (2024.9.0)\n", "Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (12.1.105)\n", "Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (12.1.105)\n", "Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (12.1.105)\n", "Requirement already satisfied: 
nvidia-cudnn-cu12==9.1.0.70 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (9.1.0.70)\n", "Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (12.1.3.1)\n", "Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (11.0.2.54)\n", "Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (10.3.2.106)\n", "Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (11.4.5.107)\n", "Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (12.1.0.106)\n", "Requirement already satisfied: nvidia-nccl-cu12==2.20.5 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (2.20.5)\n", "Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (12.1.105)\n", "Requirement already satisfied: triton==3.0.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch->flash-attn) (3.0.0)\n", "Requirement already satisfied: nvidia-nvjitlink-cu12 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from nvidia-cusolver-cu12==11.4.5.107->torch->flash-attn) (12.6.77)\n", "Requirement already satisfied: MarkupSafe>=2.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from jinja2->torch->flash-attn) (3.0.2)\n", "Requirement already satisfied: mpmath<1.4,>=1.1.0 in 
/system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from sympy->torch->flash-attn) (1.3.0)\n" ] } ], "source": [ "!pip3 install --upgrade pip setuptools wheel\n", "!pip3 install torch==2.4.0 torchvision==0.19.0\n", "!pip3 list | grep torch\n", "!pip3 install flash-attn --no-build-isolation" ] }, { "cell_type": "markdown", "metadata": { "id": "HzV28CwOmruV" }, "source": [ "## Install and verify verl\n", "Now we're ready to install verl!" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "0mtIn1VOk2E7", "outputId": "8a83156e-c3aa-4921-97e2-a9472f22ed9d" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Obtaining file:///teamspace/studios/this_studio/verl_repo\n", " Installing build dependencies ... \u001b[?25ldone\n", "\u001b[?25h Checking if build backend supports build_editable ... \u001b[?25ldone\n", "\u001b[?25h Getting requirements to build editable ... \u001b[?25ldone\n", "\u001b[?25h Preparing editable metadata (pyproject.toml) ... 
\u001b[?25ldone\n", "\u001b[?25hRequirement already satisfied: accelerate in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from verl==0.1) (1.1.1)\n", "Requirement already satisfied: codetiming in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from verl==0.1) (1.4.0)\n", "Requirement already satisfied: datasets in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from verl==0.1) (3.1.0)\n", "Requirement already satisfied: dill in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from verl==0.1) (0.3.8)\n", "Requirement already satisfied: hydra-core in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from verl==0.1) (1.3.2)\n", "Requirement already satisfied: numpy in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from verl==0.1) (1.26.4)\n", "Requirement already satisfied: pybind11 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from verl==0.1) (2.13.6)\n", "Requirement already satisfied: ray in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from verl==0.1) (2.10.0)\n", "Requirement already satisfied: tensordict in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from verl==0.1) (0.5.0)\n", "Requirement already satisfied: transformers in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from verl==0.1) (4.46.3)\n", "Requirement already satisfied: vllm<=0.6.3 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from verl==0.1) (0.5.4)\n", "Requirement already satisfied: cmake>=3.21 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (3.31.1)\n", "Requirement already satisfied: ninja in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (1.11.1.2)\n", "Requirement already satisfied: psutil in 
/system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (6.1.0)\n", "Requirement already satisfied: sentencepiece in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (0.2.0)\n", "Requirement already satisfied: requests in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (2.32.3)\n", "Requirement already satisfied: tqdm in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (4.67.1)\n", "Requirement already satisfied: py-cpuinfo in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (9.0.0)\n", "Requirement already satisfied: tokenizers>=0.19.1 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (0.20.3)\n", "Requirement already satisfied: fastapi in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (0.115.4)\n", "Requirement already satisfied: aiohttp in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (3.10.10)\n", "Requirement already satisfied: openai in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (1.55.3)\n", "Requirement already satisfied: uvicorn[standard] in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (0.32.0)\n", "Requirement already satisfied: pydantic>=2.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (2.9.2)\n", "Requirement already satisfied: pillow in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (10.4.0)\n", "Requirement already satisfied: prometheus-client>=0.18.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from 
vllm<=0.6.3->verl==0.1) (0.21.0)\n", "Requirement already satisfied: prometheus-fastapi-instrumentator>=7.0.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (7.0.0)\n", "Requirement already satisfied: tiktoken>=0.6.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (0.7.0)\n", "Requirement already satisfied: lm-format-enforcer==0.10.3 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (0.10.3)\n", "Requirement already satisfied: outlines<0.1,>=0.0.43 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (0.0.46)\n", "Requirement already satisfied: typing-extensions in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (4.12.2)\n", "Requirement already satisfied: filelock>=3.10.4 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (3.16.1)\n", "Requirement already satisfied: pyzmq in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (26.2.0)\n", "Requirement already satisfied: nvidia-ml-py in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (12.560.30)\n", "Requirement already satisfied: torch==2.4.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (2.4.0)\n", "Requirement already satisfied: torchvision==0.19 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (0.19.0)\n", "Requirement already satisfied: xformers==0.0.27.post2 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (0.0.27.post2)\n", "Requirement already satisfied: vllm-flash-attn==2.6.1 in 
/system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from vllm<=0.6.3->verl==0.1) (2.6.1)\n", "Requirement already satisfied: interegular>=0.3.2 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from lm-format-enforcer==0.10.3->vllm<=0.6.3->verl==0.1) (0.3.3)\n", "Requirement already satisfied: packaging in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from lm-format-enforcer==0.10.3->vllm<=0.6.3->verl==0.1) (24.1)\n", "Requirement already satisfied: pyyaml in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from lm-format-enforcer==0.10.3->vllm<=0.6.3->verl==0.1) (6.0.2)\n", "Requirement already satisfied: sympy in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (1.13.3)\n", "Requirement already satisfied: networkx in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (3.4.2)\n", "Requirement already satisfied: jinja2 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (3.1.4)\n", "Requirement already satisfied: fsspec in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (2024.9.0)\n", "Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (12.1.105)\n", "Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (12.1.105)\n", "Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (12.1.105)\n", "Requirement already satisfied: 
nvidia-cudnn-cu12==9.1.0.70 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (9.1.0.70)\n", "Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (12.1.3.1)\n", "Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (11.0.2.54)\n", "Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (10.3.2.106)\n", "Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (11.4.5.107)\n", "Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (12.1.0.106)\n", "Requirement already satisfied: nvidia-nccl-cu12==2.20.5 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (2.20.5)\n", "Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (12.1.105)\n", "Requirement already satisfied: triton==3.0.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from torch==2.4.0->vllm<=0.6.3->verl==0.1) (3.0.0)\n", "Requirement already satisfied: nvidia-nvjitlink-cu12 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from nvidia-cusolver-cu12==11.4.5.107->torch==2.4.0->vllm<=0.6.3->verl==0.1) (12.6.77)\n", "Requirement already satisfied: click>=7.0 in 
/system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from ray->verl==0.1) (8.1.7)\n", "Requirement already satisfied: jsonschema in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from ray->verl==0.1) (4.23.0)\n", "Requirement already satisfied: msgpack<2.0.0,>=1.0.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from ray->verl==0.1) (1.1.0)\n", "Requirement already satisfied: protobuf!=3.19.5,>=3.15.3 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from ray->verl==0.1) (4.23.4)\n", "Requirement already satisfied: aiosignal in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from ray->verl==0.1) (1.3.1)\n", "Requirement already satisfied: frozenlist in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from ray->verl==0.1) (1.5.0)\n", "Requirement already satisfied: huggingface-hub<1.0,>=0.23.2 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from transformers->verl==0.1) (0.26.3)\n", "Requirement already satisfied: regex!=2019.12.17 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from transformers->verl==0.1) (2023.10.3)\n", "Requirement already satisfied: safetensors>=0.4.1 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from transformers->verl==0.1) (0.4.5)\n", "Requirement already satisfied: pyarrow>=15.0.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from datasets->verl==0.1) (18.1.0)\n", "Requirement already satisfied: pandas in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from datasets->verl==0.1) (2.1.4)\n", "Requirement already satisfied: xxhash in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from datasets->verl==0.1) (3.5.0)\n", "Requirement already satisfied: multiprocess<0.70.17 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from 
datasets->verl==0.1) (0.70.16)\n", "Requirement already satisfied: omegaconf<2.4,>=2.2 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from hydra-core->verl==0.1) (2.3.0)\n", "Requirement already satisfied: antlr4-python3-runtime==4.9.* in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from hydra-core->verl==0.1) (4.9.3)\n", "Requirement already satisfied: cloudpickle in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from tensordict->verl==0.1) (3.1.0)\n", "Requirement already satisfied: orjson in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from tensordict->verl==0.1) (3.10.12)\n", "Requirement already satisfied: aiohappyeyeballs>=2.3.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from aiohttp->vllm<=0.6.3->verl==0.1) (2.4.3)\n", "Requirement already satisfied: attrs>=17.3.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from aiohttp->vllm<=0.6.3->verl==0.1) (24.2.0)\n", "Requirement already satisfied: multidict<7.0,>=4.5 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from aiohttp->vllm<=0.6.3->verl==0.1) (6.1.0)\n", "Requirement already satisfied: yarl<2.0,>=1.12.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from aiohttp->vllm<=0.6.3->verl==0.1) (1.17.1)\n", "Requirement already satisfied: async-timeout<5.0,>=4.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from aiohttp->vllm<=0.6.3->verl==0.1) (4.0.3)\n", "Requirement already satisfied: lark in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from outlines<0.1,>=0.0.43->vllm<=0.6.3->verl==0.1) (1.2.2)\n", "Requirement already satisfied: nest-asyncio in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from outlines<0.1,>=0.0.43->vllm<=0.6.3->verl==0.1) (1.6.0)\n", "Requirement already satisfied: diskcache in 
/system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from outlines<0.1,>=0.0.43->vllm<=0.6.3->verl==0.1) (5.6.3)\n", "Requirement already satisfied: numba in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from outlines<0.1,>=0.0.43->vllm<=0.6.3->verl==0.1) (0.60.0)\n", "Requirement already satisfied: referencing in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from outlines<0.1,>=0.0.43->vllm<=0.6.3->verl==0.1) (0.35.1)\n", "Requirement already satisfied: pycountry in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from outlines<0.1,>=0.0.43->vllm<=0.6.3->verl==0.1) (24.6.1)\n", "Requirement already satisfied: pyairports in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from outlines<0.1,>=0.0.43->vllm<=0.6.3->verl==0.1) (2.1.1)\n", "Requirement already satisfied: starlette<1.0.0,>=0.30.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from prometheus-fastapi-instrumentator>=7.0.0->vllm<=0.6.3->verl==0.1) (0.41.2)\n", "Requirement already satisfied: annotated-types>=0.6.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from pydantic>=2.0->vllm<=0.6.3->verl==0.1) (0.7.0)\n", "Requirement already satisfied: pydantic-core==2.23.4 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from pydantic>=2.0->vllm<=0.6.3->verl==0.1) (2.23.4)\n", "Requirement already satisfied: charset-normalizer<4,>=2 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from requests->vllm<=0.6.3->verl==0.1) (3.4.0)\n", "Requirement already satisfied: idna<4,>=2.5 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from requests->vllm<=0.6.3->verl==0.1) (3.10)\n", "Requirement already satisfied: urllib3<3,>=1.21.1 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from requests->vllm<=0.6.3->verl==0.1) (2.2.3)\n", "Requirement already 
satisfied: certifi>=2017.4.17 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from requests->vllm<=0.6.3->verl==0.1) (2024.8.30)\n", "Requirement already satisfied: jsonschema-specifications>=2023.03.6 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from jsonschema->ray->verl==0.1) (2024.10.1)\n", "Requirement already satisfied: rpds-py>=0.7.1 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from jsonschema->ray->verl==0.1) (0.20.1)\n", "Requirement already satisfied: anyio<5,>=3.5.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from openai->vllm<=0.6.3->verl==0.1) (4.6.2.post1)\n", "Requirement already satisfied: distro<2,>=1.7.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from openai->vllm<=0.6.3->verl==0.1) (1.9.0)\n", "Requirement already satisfied: httpx<1,>=0.23.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from openai->vllm<=0.6.3->verl==0.1) (0.27.2)\n", "Requirement already satisfied: jiter<1,>=0.4.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from openai->vllm<=0.6.3->verl==0.1) (0.8.0)\n", "Requirement already satisfied: sniffio in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from openai->vllm<=0.6.3->verl==0.1) (1.3.1)\n", "Requirement already satisfied: python-dateutil>=2.8.2 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from pandas->datasets->verl==0.1) (2.9.0.post0)\n", "Requirement already satisfied: pytz>=2020.1 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from pandas->datasets->verl==0.1) (2024.2)\n", "Requirement already satisfied: tzdata>=2022.1 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from pandas->datasets->verl==0.1) (2024.2)\n", "Requirement already satisfied: h11>=0.8 in 
/system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from uvicorn[standard]->vllm<=0.6.3->verl==0.1) (0.14.0)\n", "Requirement already satisfied: httptools>=0.5.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from uvicorn[standard]->vllm<=0.6.3->verl==0.1) (0.6.4)\n", "Requirement already satisfied: python-dotenv>=0.13 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from uvicorn[standard]->vllm<=0.6.3->verl==0.1) (1.0.1)\n", "Requirement already satisfied: uvloop!=0.15.0,!=0.15.1,>=0.14.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from uvicorn[standard]->vllm<=0.6.3->verl==0.1) (0.21.0)\n", "Requirement already satisfied: watchfiles>=0.13 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from uvicorn[standard]->vllm<=0.6.3->verl==0.1) (0.24.0)\n", "Requirement already satisfied: websockets>=10.4 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from uvicorn[standard]->vllm<=0.6.3->verl==0.1) (13.1)\n", "Requirement already satisfied: exceptiongroup>=1.0.2 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from anyio<5,>=3.5.0->openai->vllm<=0.6.3->verl==0.1) (1.2.2)\n", "Requirement already satisfied: httpcore==1.* in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from httpx<1,>=0.23.0->openai->vllm<=0.6.3->verl==0.1) (1.0.6)\n", "Requirement already satisfied: six>=1.5 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from python-dateutil>=2.8.2->pandas->datasets->verl==0.1) (1.16.0)\n", "Requirement already satisfied: propcache>=0.2.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from yarl<2.0,>=1.12.0->aiohttp->vllm<=0.6.3->verl==0.1) (0.2.0)\n", "Requirement already satisfied: MarkupSafe>=2.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from jinja2->torch==2.4.0->vllm<=0.6.3->verl==0.1) 
(3.0.2)\n", "Requirement already satisfied: llvmlite<0.44,>=0.43.0dev0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from numba->outlines<0.1,>=0.0.43->vllm<=0.6.3->verl==0.1) (0.43.0)\n", "Requirement already satisfied: mpmath<1.4,>=1.1.0 in /system/conda/miniconda3/envs/cloudspace/lib/python3.10/site-packages (from sympy->torch==2.4.0->vllm<=0.6.3->verl==0.1) (1.3.0)\n", "Building wheels for collected packages: verl\n", " Building editable for verl (pyproject.toml) ... \u001b[?25ldone\n", "\u001b[?25h Created wheel for verl: filename=verl-0.1-0.editable-py3-none-any.whl size=13000 sha256=8fd1f1241dfe89d7f8384fe884f50ec4e070d18029c37472e5584300f5a326de\n", " Stored in directory: /tmp/pip-ephem-wheel-cache-pz36kou4/wheels/f4/30/ea/7a2d2086bd780aba22048a0b415dc5e5a9e50b2c87e39e9717\n", "Successfully built verl\n", "Installing collected packages: verl\n", "Successfully installed verl-0.1\n" ] } ], "source": [ "# In case you run this notebook and have not cloned verl yet:\n", "# !git clone https://github.com/volcengine/verl $HOME/verl_repo\n", "\n", "!cd $HOME/verl_repo && pip3 install -e . 
-U" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Restart the Python kernel" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'status': 'ok', 'restart': True}" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import IPython\n", "\n", "# Restart the kernel to pick up the freshly installed Python packages\n", "IPython.get_ipython().kernel.do_shutdown(restart=True)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "id": "mOBX8Jqc-ZBe" }, "outputs": [], "source": [ "import torch\n", "\n", "try:\n", "    # Verify a CUDA GPU is available and supports bfloat16\n", "    assert torch.cuda.is_available()\n", "    torch.ones(1, dtype=torch.bfloat16).cuda()\n", "except AssertionError:\n", "    print(\"Please switch to an env with GPUs supporting bfloat16 (L4, RTX 5000, A5000, A100, H100, A10, etc.)\")\n", "\n", "try:\n", "    import verl\n", "except Exception as e:\n", "    print(\"Please install verl via pip and restart the kernel\")\n", "    raise e\n", "\n", "import flash_attn" ] }, { "cell_type": "markdown", "metadata": { "id": "9mawNxDfo3Uu" }, "source": [ "# Load Pretrained Language Model\n", "\n", "verl supports models available in Huggingface transformers (as well as custom Megatron models).\n", "\n", "Let's download the model first."
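, "\n", "\n", "As a minimal sketch of what loading the model looks like with Huggingface `transformers` (the checkpoint id below is an illustrative choice, not fixed by this tutorial):\n", "\n", "```python\n", "from transformers import AutoModelForCausalLM, AutoTokenizer\n", "\n", "model_id = \"Qwen/Qwen2.5-0.5B-Instruct\"  # illustrative small Qwen checkpoint\n", "tokenizer = AutoTokenizer.from_pretrained(model_id)\n", "model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=\"auto\")\n", "```"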
] }, { "cell_type": "code", "execution_count": 4, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 404, "referenced_widgets": [ "763679906f9248a7a5f4c8de952d98ae", "b319db13c64b43a38250342c81708f70", "5110a596739443a8a640cfd50030644b", "e93bf508749940909c3233904e898497", "b45a42483d64410ba245feda17ae3e16", "538e82daa19140098a4053da6e23de45", "6b14f827b15f4e34be6590a5d2085b64", "07f455e5e6dd45b7ba52f78bfc7ec7d6", "fdf06125a50249b8878dbf01993306f4", "0ab915aba5e14e5bba7ba1c22a682b89", "2749b87567ea4b6cbc4cf825e2282615", "645fee7bcccd42a794e4aa889c1fe145", "aa19071cede44a089d7f3b19227d51e0", "412349b6e00b4994bc3f63f8405b3ec2", "a921f9b0d3c74381b75aa60f4d1cac1c", "b707bf4c56744f05ac9245b07f6d1788", "252e651f687e47f3bd20518f2ac5fb9f", "835a6a0a56554d158ee40ccc5ccdffc5", "be14bccf9f114d9f839c805afef08f61", "52268a2d657b4e19badd66f781f68d93", "06873240926949d98e13872546c5231d", "936061cb57ad445195efc0aa24dd8d66", "144df34a87334a6d8eb13055e7a9b9e4", "1e9ee1c383074f638a688b029d72bc79", "5cfeadb8ff394f38ac2e23f1a66beeb3", "0eeef594fb564491ad8d80f86a8fbfdc", "771c5ca9460f4539b30f452dd3f36b12", "fab6aab315214fcb884529a4dbf84fe5", "06e1b9b5d49d4ee3ab8d1a523659bcbf", "e3848f0a11f8472fba3ecb624bc86dd9", "c7b67dd574ad4c15b36930047553e9d3", "9fbafd9fc26748b7889b5c52600f80a8", "889e01d618544f7c9a9d748730255007", "69e57962129241a689cfd2933b64127c", "4bdbe0a8bb434bfc8e2172ecb5189705", "b0bbbf7f9f264dfda2c0d6775567e446", "6c9485ecc56f4027ad8f3824554e3968", "3447ed64518746cabb0176348fc88d96", "35e124a16d2945ddbb3ade95ef2b5519", "7de86c10755f4e0da7974bdf1815a85d", "4957b3690466495997721afab68ad93a", "9e2c1dcd2cd643bbb941d6697fcc75a0", "b10402691cc3480693dcf49d19336c72", "f0350562775a4c4ca83772a78d05122b", "1a382959fdeb4554827397823284d2fa", "f52d7af1a82249a3aa7785476e10c2ad", "afcc65785fef4b71b03ac83a4b14d97f", "c0b19ca098a443598c662921832e8799", "ca24445f8af44c8397f12d15d66eebf5", "6cd310d2188d424eb20c3bf83ac34f56", "ddecda628c6a4a5680b4241633153ebd", 
"e49f1b46d8ae4c3e8f894b1f411922b9", "0c9b8ffe4b8c4b5ca72a21cc54a1feb9", "c3651669cb084d86b9b8c427c665d185", "35bacfb8aa4c4a25bf8ce2d13a00f2b8", "c1020ed4d8a44747838ed59287d284ed", "a726ef24d10c42bf859e4c76cebde672", "40259328dd5e4256939d7b1a3f038d98", "ee0b85738cbf4376a6427fadbdecfad7", "1491cbb53c6e4bfb9d17cf123dea83dd", "5c8c3c4d700540f089f671d4f5d0dd9f", "7c45a87d87f44b2384a4fd316ae36663", "866c770e39b64dfd9764de755f6a9ec5", "2babfcd555104f9d8ecf98e164ec40fc", "7920655156a44e629514673dde2b9663", "c24df93c305e42cdbaed3d6111d72010", "c0e97dba53284330b0fb8cefc852d552", "4d1a260957214732940766c874d3a02b", "89a180c90767474b8e699e264620666e", "7363ebea3a3a4f55b69b2d813c3b2fa5", "d49791321218419d8b7af314dd904777", "e6b66ca90c9c4b0ead5153e4a07cdc86", "3e1dd2fd3bb049ab83aa987d748f5b9e", "a1255e85757e495a86ae366857fb64f1", "6f3742161c4f4bcc891c82aff7ece69f", "e9f9be6fa1744f3380d21c451bc81555", "c5024f35870446a0ae8fd747101ab719" ] }, "id": "k8FsgBYnpR-R", "outputId": "57e0e9ae-2c9e-498d-849f-7eafe59c4c03" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Fetching 10 files: 0%| | 0/10 [00:00>20 Since the fraction representing the number of teaspoons she used is 7/20, she used 7/20120 = <<7/20120=42>>42 #### 42" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "AgRCvb6V6B3A", "outputId": "f45c7f69-2f81-4b19-98de-8dc4b869736d" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Map: 100%|████████████████████████| 7473/7473 [00:00<00:00, 24893.94 examples/s]\n", "Map: 100%|████████████████████████| 1319/1319 [00:00<00:00, 25219.21 examples/s]\n", "Creating parquet from Arrow format: 100%|████████| 8/8 [00:00<00:00, 257.77ba/s]\n", "Creating parquet from Arrow format: 100%|████████| 2/2 [00:00<00:00, 370.05ba/s]\n" ] } ], "source": [ "!mkdir -p $HOME/data/gsm8k\n", "!python3 $HOME/verl_repo/examples/data_preprocess/gsm8k.py --local_dir $HOME/data/gsm8k" ] }, { 
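GSM8K reference solutions end with `#### <answer>`, and the rule-based reward below relies on that marker. A minimal, hedged sketch of the extract-and-compare logic (the real helpers live in `verl.utils.reward_score.gsm8k`; the regex and the `flexible` fallback here are approximations):

```python
import re

def extract_solution(solution_str: str, method: str = "strict"):
    """Pull the final answer out of a GSM8K-style solution string.

    'strict' requires the '#### <answer>' format; 'flexible' falls back
    to the last number appearing anywhere in the text.
    """
    if method == "strict":
        match = re.search(r"#### (\-?[0-9\.\,]+)", solution_str)
        return match.group(1).replace(",", "") if match else None
    numbers = re.findall(r"\-?[0-9\.\,]+", solution_str)
    return numbers[-1].replace(",", "") if numbers else None

def compute_score(solution_str, ground_truth, method="strict",
                  format_score=0.0, score=1.0):
    # Reward 'score' for a correct answer, 'format_score' for a wrong one,
    # and 0 when no answer can be extracted at all.
    answer = extract_solution(solution_str, method)
    if answer is None:
        return 0
    return score if answer == ground_truth else format_score
```

For example, `compute_score("... she used 42 teaspoons #### 42", "42")` returns `1.0`, while a response without the `####` marker scores `0`.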
"cell_type": "markdown", "metadata": { "id": "JPZZKBxunAoj" }, "source": [ "# The Reward\n", "\n", "We use a rule-based reward model. We force the model to produce its final answer after four `#` characters (`####`), matching the format of the reference solutions. We extract the final answer from both the reference solution and the model's output using regular-expression matching, then compare them: a correct answer receives a reward of 1, an incorrect answer receives the `format_score` (0.1 here), and a missing answer receives 0." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "SjjLVuO60WD1", "outputId": "affb562c-7f6c-41f7-ef47-4dfea0020e90" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "def compute_score(solution_str, ground_truth, method='strict', format_score=0., score=1.):\n", " \"\"\"The scoring function for GSM8k.\n", "\n", " Reference: Trung, Luong, et al. \"Reft: Reasoning with reinforced fine-tuning.\" Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
2024.\n", "\n", " Args:\n", " solution_str: the solution text\n", " ground_truth: the ground truth\n", " method: the method to extract the solution, choices are 'strict' and 'flexible'\n", " format_score: the score for the format\n", " score: the score for the correct answer\n", " \"\"\"\n", " answer = extract_solution(solution_str=solution_str, method=method)\n", " if answer is None:\n", " return 0\n", " else:\n", " if answer == ground_truth:\n", " return score\n", " else:\n", " return format_score\n", "\n" ] } ], "source": [ "import inspect\n", "from verl.utils.reward_score.gsm8k import compute_score as gsm8k_reward\n", "\n", "print(inspect.getsource(gsm8k_reward))" ] }, { "cell_type": "markdown", "metadata": { "id": "NPBGPdSD0sCF" }, "source": [ "# Run the RL Pipeline\n", "Let's start with the Proximal Policy Optimization (PPO) algorithm, one of the most widely used methods for post-training large language models.\n", "\n", "The main entry point of the PPO example is `main_ppo.py`. A detailed guide to understanding the code architecture of `main_ppo.py` is available [here](https://verl.readthedocs.io/en/latest/examples/ppo_code_architecture.html).\n", "\n", "In this tutorial, we will demonstrate how to run the PPO algorithm with **Qwen2.5-0.5B-Instruct** by setting:\n", "- `trainer.n_gpus_per_node`: Number of GPUs per node.\n", "\n", "- `actor_rollout_ref.rollout.tensor_model_parallel_size`: TP size for rollout. Only effective for the vLLM backend.\n", "\n", "- `actor_rollout_ref/critic.model.path`: Huggingface model path. This can be either a local path or an HDFS path. For HDFS paths, we provide utils to download the model to DRAM and convert the HDFS path to a local one.\n", "\n", "- `data.train_batch_size`: Batch size of prompts sampled for one training iteration of the RL algorithm.\n", "\n", "- `data.max_prompt_length`: Maximum prompt length. All prompts will be left-padded to this length. 
An error is raised if a prompt exceeds this length.\n", "\n", "- `data.max_response_length`: Maximum response length. Rollout in RL algorithms (e.g. PPO) generates up to this many tokens.\n", "\n", "- `actor_rollout_ref.actor.ppo_mini_batch_size`: Each train batch is split into mini-batches with batch_size=ppo_mini_batch_size for PPO updates.\n", "\n", "- `actor_rollout_ref.actor.ppo_micro_batch_size` and `critic.ppo_micro_batch_size`: Similar to gradient accumulation, the micro_batch_size for one forward pass, trading speed for GPU memory.\n", "\n", "The full configuration explanation is available [here](https://verl.readthedocs.io/en/latest/examples/config.html).\n", "\n", "The training may take a few hours to finish, but you can observe how the model's performance improves. It will progressively output:\n", "\n", "- generated sentences.\n", "\n", "- step information with RL metrics, such as entropy loss, KL, and `val/test_score/openai/gsm8k` (validated every `trainer.test_freq` steps)\n", "\n", "If you run into GPU out-of-memory issues, set smaller values for the micro batch sizes used for gradient accumulation:\n", "\n", "- `actor_rollout_ref.actor.ppo_micro_batch_size=1`\n", "- `critic.ppo_micro_batch_size=1`" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "GvyEebBB4eCA", "outputId": "a0bb8f75-6f79-456c-c71f-254c84503763" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2025-01-10 21:40:29,298\tINFO worker.py:1752 -- Started a local Ray instance.\n", "\u001b[36m(main_task pid=28294)\u001b[0m {'actor_rollout_ref': {'actor': {'clip_ratio': 0.2,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'entropy_coeff': 0.001,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'fsdp_config': {'grad_offload': False,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'optimizer_offload': False,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'param_offload': False,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 
'wrap_policy': {'min_num_params': 0}},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'grad_clip': 1.0,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'optim': {'lr': 1e-06,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'lr_warmup_steps_ratio': 0.0,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'min_lr_ratio': None,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'total_training_steps': -1,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'warmup_style': 'constant'},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'ppo_epochs': 1,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'ppo_micro_batch_size': 4,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'ppo_mini_batch_size': 64,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'shuffle': True,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'strategy': 'fsdp'},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'hybrid_engine': True,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'model': {'enable_gradient_checkpointing': False,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'external_lib': None,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'override_config': {},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'path': '/teamspace/studios/this_studio/models/Qwen2.5-0.5B-Instruct'},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'ref': {'fsdp_config': {'param_offload': False,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'wrap_policy': {'min_num_params': 0}},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'log_prob_micro_batch_size_per_gpu': 4},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'rollout': {'do_sample': True,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'dtype': 'bfloat16',\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'enforce_eager': True,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'free_cache_engine': True,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'gpu_memory_utilization': 0.4,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'ignore_eos': False,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'load_format': 
'dummy_dtensor',\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'log_prob_micro_batch_size_per_gpu': 1,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'max_num_batched_tokens': 8192,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'max_num_seqs': 1024,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'n': 1,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'name': 'vllm',\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'prompt_length': 512,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'response_length': 256,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'temperature': 1.0,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'tensor_model_parallel_size': 1,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'top_k': -1,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'top_p': 1}},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'algorithm': {'adv_estimator': 'gae',\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'gamma': 1.0,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'kl_ctrl': {'kl_coef': 0.001, 'type': 'fixed'},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'kl_penalty': 'kl',\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'lam': 1.0},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'critic': {'cliprange_value': 0.5,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'forward_micro_batch_size_per_gpu': 4,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'grad_clip': 1.0,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'model': {'enable_gradient_checkpointing': False,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'external_lib': None,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'fsdp_config': {'grad_offload': False,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'optimizer_offload': False,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'param_offload': False,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'wrap_policy': {'min_num_params': 0}},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'override_config': {},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'path': 
'/teamspace/studios/this_studio/models/Qwen2.5-0.5B-Instruct',\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'tokenizer_path': '/teamspace/studios/this_studio/models/Qwen2.5-0.5B-Instruct'},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'optim': {'lr': 1e-05,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'lr_warmup_steps_ratio': 0.0,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'min_lr_ratio': None,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'total_training_steps': -1,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'warmup_style': 'constant'},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'ppo_epochs': 1,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'ppo_micro_batch_size': 4,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'ppo_mini_batch_size': 64,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'shuffle': True,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'strategy': 'fsdp'},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'data': {'max_prompt_length': 512,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'max_response_length': 256,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'prompt_key': 'prompt',\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'return_raw_chat': False,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'return_raw_input_ids': False,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'tokenizer': None,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'train_batch_size': 256,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'train_files': '/teamspace/studios/this_studio/data/gsm8k/train.parquet',\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'val_batch_size': 1312,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'val_files': '/teamspace/studios/this_studio/data/gsm8k/test.parquet'},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'reward_model': {'enable': False,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'max_length': None,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'micro_batch_size': 64,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'model': {'external_lib': 
None,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'fsdp_config': {'min_num_params': 0,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'param_offload': False},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'input_tokenizer': '/teamspace/studios/this_studio/models/Qwen2.5-0.5B-Instruct',\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'path': '~/models/FsfairX-LLaMA3-RM-v0.1'},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'strategy': 'fsdp'},\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'trainer': {'critic_warmup': 0,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'default_hdfs_dir': None,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'default_local_dir': 'checkpoints/verl_examples/gsm8k',\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'experiment_name': 'gsm8k',\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'logger': ['console'],\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'n_gpus_per_node': 1,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'nnodes': 1,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'project_name': 'verl_examples',\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'save_freq': 10,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'test_freq': 10,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'total_epochs': 15,\n", "\u001b[36m(main_task pid=28294)\u001b[0m 'val_before_train': False}}\n", "\u001b[36m(main_task pid=28294)\u001b[0m original dataset len: 7473\n", "\u001b[36m(main_task pid=28294)\u001b[0m filter dataset len: 7473\n", "\u001b[36m(main_task pid=28294)\u001b[0m original dataset len: 1319\n", "\u001b[36m(main_task pid=28294)\u001b[0m filter dataset len: 1319\n", "\u001b[36m(main_task pid=28294)\u001b[0m Size of train dataloader: 29\n", "\u001b[36m(main_task pid=28294)\u001b[0m Size of val dataloader: 1\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Critic overriding config {'bos_token_id': None, 'eos_token_id': 151645, 'pad_token_id': 151643}\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Flash Attention 2.0 only supports torch.float16 and 
torch.bfloat16 dtypes, but the current dype in Qwen2ForCausalLM is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained(\"openai/whisper-tiny\", attn_implementation=\"flash_attention_2\", torch_dtype=torch.float16)`\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Qwen2ForCausalLM contains 494.03M parameters\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Before critic FSDP, memory allocated (GB): 0.0, memory reserved (GB): 0.0\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:440: UserWarning: FSDP is switching to use `NO_SHARD` instead of ShardingStrategy.FULL_SHARD since the world size is 1.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m NCCL version 2.20.5+cuda12.4\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m ip-10-192-12-228:28545:28765 [0] nccl_net_ofi_init:1472 NCCL WARN NET/OFI aws-ofi-nccl initialization failed\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m After critic FSDP, memory allocated (GB): 1.8410954475402832, memory reserved (GB): 2.95703125\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Total steps: 435, num_warmup_steps: 0\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Model config after override: Qwen2Config {\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"_name_or_path\": \"/teamspace/studios/this_studio/models/Qwen2.5-0.5B-Instruct\",\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"architectures\": [\n", 
"\u001b[36m(WorkerDict pid=28545)\u001b[0m \"Qwen2ForCausalLM\"\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m ],\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"attention_dropout\": 0.0,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"eos_token_id\": 151645,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"hidden_act\": \"silu\",\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"hidden_size\": 896,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"initializer_range\": 0.02,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"intermediate_size\": 4864,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"max_position_embeddings\": 32768,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"max_window_layers\": 21,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"model_type\": \"qwen2\",\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"num_attention_heads\": 14,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"num_hidden_layers\": 24,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"num_key_value_heads\": 2,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"pad_token_id\": 151643,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"rms_norm_eps\": 1e-06,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"rope_scaling\": null,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"rope_theta\": 1000000.0,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"sliding_window\": null,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"tie_word_embeddings\": true,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"torch_dtype\": \"bfloat16\",\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"transformers_version\": \"4.46.3\",\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"use_cache\": true,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"use_sliding_window\": false,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"vocab_size\": 151936\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m }\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Qwen2ForCausalLM 
contains 494.03M parameters\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m wrap_policy: functools.partial(, transformer_layer_cls={})\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:440: UserWarning: FSDP is switching to use `NO_SHARD` instead of ShardingStrategy.FULL_SHARD since the world size is 1.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Model config after override: Qwen2Config {\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"_name_or_path\": \"/teamspace/studios/this_studio/models/Qwen2.5-0.5B-Instruct\",\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"architectures\": [\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"Qwen2ForCausalLM\"\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m ],\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"attention_dropout\": 0.0,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"eos_token_id\": 151645,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"hidden_act\": \"silu\",\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"hidden_size\": 896,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"initializer_range\": 0.02,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"intermediate_size\": 4864,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"max_position_embeddings\": 32768,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"max_window_layers\": 21,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"model_type\": \"qwen2\",\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"num_attention_heads\": 14,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"num_hidden_layers\": 24,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"num_key_value_heads\": 2,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"pad_token_id\": 151643,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"rms_norm_eps\": 1e-06,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"rope_scaling\": null,\n", 
"\u001b[36m(WorkerDict pid=28545)\u001b[0m \"rope_theta\": 1000000.0,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"sliding_window\": null,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"tie_word_embeddings\": true,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"torch_dtype\": \"bfloat16\",\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"transformers_version\": \"4.46.3\",\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"use_cache\": true,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"use_sliding_window\": false,\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \"vocab_size\": 151936\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m }\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m \n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Qwen2ForCausalLM contains 494.03M parameters\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m wrap_policy: functools.partial(, transformer_layer_cls={})\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:440: UserWarning: FSDP is switching to use `NO_SHARD` instead of ShardingStrategy.FULL_SHARD since the world size is 1.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Total steps: 435, num_warmup_steps: 0\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Before building vllm rollout, memory allocated (GB): 4.602750778198242, memory reserved (GB): 5.78125\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m local rank 0\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m no hf weight loader need to be updated\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m before init cache memory allocated: 5.944623104GB, reserved: 6.067060736GB\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m after init cache memory allocated: 21.606612992GB, reserved: 21.770534912GB\n", "\u001b[36m(main_task pid=28294)\u001b[0m Using LocalLogger is deprecated. 
The constructor API will change \n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m kwargs: {'n': 1, 'logprobs': 1, 'max_tokens': 256, 'detokenize': False, 'temperature': 1.0, 'top_k': -1, 'top_p': 1, 'ignore_eos': False}\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m After building vllm rollout, memory allocated (GB): 19.201326370239258, memory reserved (GB): 20.275390625\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m After building sharding manager, memory allocated (GB): 19.201326370239258, memory reserved (GB): 20.275390625\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:689: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:773: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:716: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:0 - critic/kl:0.000 - critic/kl_coeff:0.001 - critic/vf_loss:10.488 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.533 - 
critic/grad_norm:377.446 - critic/lr:0.000 - actor/entropy_loss:0.461 - actor/pg_loss:0.001 - actor/pg_clipfrac:0.000 - actor/ppo_kl:0.000 - actor/grad_norm:2.304 - actor/lr:0.000 - critic/score/mean:0.012 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.012 - critic/rewards/max:1.001 - critic/rewards/min:-0.002 - critic/advantages/mean:0.000 - critic/advantages/max:3.686 - critic/advantages/min:-4.200 - critic/returns/mean:0.010 - critic/returns/max:1.001 - critic/returns/min:-0.002 - critic/values/mean:0.527 - critic/values/max:19.625 - critic/values/min:-16.250 - response_length/mean:235.977 - response_length/max:256.000 - response_length/min:83.000 - prompt_length/mean:104.766 - prompt_length/max:183.000 - prompt_length/min:66.000 - timing/gen:24.928 - timing/ref:3.332 - timing/values:3.153 - timing/adv:0.064 - timing/update_critic:9.192 - timing/update_actor:10.928 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:1 - critic/kl:0.000 - critic/kl_coeff:0.001 - critic/vf_loss:10.038 - critic/vf_clipfrac:0.441 - critic/vpred_mean:-0.168 - critic/grad_norm:263.810 - critic/lr:0.000 - actor/entropy_loss:0.463 - actor/pg_loss:-0.002 - actor/pg_clipfrac:0.001 - actor/ppo_kl:-0.000 - actor/grad_norm:2.290 - actor/lr:0.000 - critic/score/mean:0.000 - critic/score/max:0.000 - critic/score/min:0.000 - critic/rewards/mean:-0.000 - critic/rewards/max:0.001 - critic/rewards/min:-0.001 - critic/advantages/mean:0.000 - critic/advantages/max:3.767 - critic/advantages/min:-3.851 - critic/returns/mean:0.000 - critic/returns/max:0.002 - critic/returns/min:-0.001 - critic/values/mean:0.494 - critic/values/max:18.125 - critic/values/min:-16.750 - response_length/mean:233.168 - response_length/max:256.000 - response_length/min:103.000 - prompt_length/mean:101.836 - 
prompt_length/max:206.000 - prompt_length/min:72.000 - timing/gen:24.277 - timing/ref:3.390 - timing/values:3.204 - timing/adv:0.070 - timing/update_critic:9.131 - timing/update_actor:10.871 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:2 - critic/kl:-0.000 - critic/kl_coeff:0.001 - critic/vf_loss:3.138 - critic/vf_clipfrac:0.420 - critic/vpred_mean:1.064 - critic/grad_norm:237.830 - critic/lr:0.000 - actor/entropy_loss:0.442 - actor/pg_loss:0.005 - actor/pg_clipfrac:0.001 - actor/ppo_kl:-0.000 - actor/grad_norm:2.197 - actor/lr:0.000 - critic/score/mean:0.008 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.008 - critic/rewards/max:1.001 - critic/rewards/min:-0.002 - critic/advantages/mean:0.000 - critic/advantages/max:5.567 - critic/advantages/min:-5.070 - critic/returns/mean:0.006 - critic/returns/max:1.001 - critic/returns/min:-0.002 - critic/values/mean:1.789 - critic/values/max:11.938 - critic/values/min:-9.375 - response_length/mean:234.031 - response_length/max:256.000 - response_length/min:84.000 - prompt_length/mean:104.551 - prompt_length/max:184.000 - prompt_length/min:67.000 - timing/gen:24.146 - timing/ref:3.414 - timing/values:3.167 - timing/adv:0.063 - timing/update_critic:9.121 - timing/update_actor:10.832 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:3 - critic/kl:0.000 - critic/kl_coeff:0.001 - critic/vf_loss:1.683 - critic/vf_clipfrac:0.380 - critic/vpred_mean:0.302 - critic/grad_norm:239.985 - critic/lr:0.000 - actor/entropy_loss:0.436 - actor/pg_loss:-0.001 - actor/pg_clipfrac:0.001 - actor/ppo_kl:0.000 - 
actor/grad_norm:2.097 - actor/lr:0.000 - critic/score/mean:0.008 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.008 - critic/rewards/max:1.000 - critic/rewards/min:-0.003 - critic/advantages/mean:-0.000 - critic/advantages/max:6.460 - critic/advantages/min:-7.391 - critic/returns/mean:0.006 - critic/returns/max:1.000 - critic/returns/min:-0.003 - critic/values/mean:1.414 - critic/values/max:11.312 - critic/values/min:-7.250 - response_length/mean:237.445 - response_length/max:256.000 - response_length/min:72.000 - prompt_length/mean:104.500 - prompt_length/max:195.000 - prompt_length/min:70.000 - timing/gen:24.064 - timing/ref:3.368 - timing/values:3.173 - timing/adv:0.066 - timing/update_critic:9.179 - timing/update_actor:11.105 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:4 - critic/kl:0.000 - critic/kl_coeff:0.001 - critic/vf_loss:0.736 - critic/vf_clipfrac:0.250 - critic/vpred_mean:0.188 - critic/grad_norm:155.038 - critic/lr:0.000 - actor/entropy_loss:0.442 - actor/pg_loss:-0.001 - actor/pg_clipfrac:0.001 - actor/ppo_kl:0.000 - actor/grad_norm:2.056 - actor/lr:0.000 - critic/score/mean:0.004 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.004 - critic/rewards/max:1.000 - critic/rewards/min:-0.002 - critic/advantages/mean:-0.000 - critic/advantages/max:6.189 - critic/advantages/min:-7.601 - critic/returns/mean:0.002 - critic/returns/max:1.000 - critic/returns/min:-0.002 - critic/values/mean:0.777 - critic/values/max:8.938 - critic/values/min:-5.875 - response_length/mean:233.352 - response_length/max:256.000 - response_length/min:46.000 - prompt_length/mean:102.660 - prompt_length/max:217.000 - prompt_length/min:66.000 - timing/gen:24.128 - timing/ref:3.418 - timing/values:3.275 - timing/adv:0.065 - 
timing/update_critic:9.343 - timing/update_actor:10.813 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:5 - critic/kl:0.000 - critic/kl_coeff:0.001 - critic/vf_loss:0.284 - critic/vf_clipfrac:0.156 - critic/vpred_mean:0.284 - critic/grad_norm:93.840 - critic/lr:0.000 - actor/entropy_loss:0.484 - actor/pg_loss:-0.004 - actor/pg_clipfrac:0.001 - actor/ppo_kl:-0.000 - actor/grad_norm:2.098 - actor/lr:0.000 - critic/score/mean:0.008 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.008 - critic/rewards/max:1.000 - critic/rewards/min:-0.002 - critic/advantages/mean:0.000 - critic/advantages/max:12.584 - critic/advantages/min:-9.160 - critic/returns/mean:0.007 - critic/returns/max:1.000 - critic/returns/min:-0.002 - critic/values/mean:0.527 - critic/values/max:6.812 - critic/values/min:-8.125 - response_length/mean:235.293 - response_length/max:256.000 - response_length/min:51.000 - prompt_length/mean:105.004 - prompt_length/max:201.000 - prompt_length/min:67.000 - timing/gen:24.027 - timing/ref:3.427 - timing/values:3.193 - timing/adv:0.060 - timing/update_critic:9.194 - timing/update_actor:10.883 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:6 - critic/kl:0.001 - critic/kl_coeff:0.001 - critic/vf_loss:0.210 - critic/vf_clipfrac:0.181 - critic/vpred_mean:0.276 - critic/grad_norm:98.484 - critic/lr:0.000 - actor/entropy_loss:0.458 - actor/pg_loss:0.001 - actor/pg_clipfrac:0.001 - actor/ppo_kl:-0.000 - actor/grad_norm:2.095 - actor/lr:0.000 - critic/score/mean:0.012 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.012 - 
critic/rewards/max:0.999 - critic/rewards/min:-0.002 - critic/advantages/mean:0.000 - critic/advantages/max:8.644 - critic/advantages/min:-14.228 - critic/returns/mean:0.011 - critic/returns/max:1.000 - critic/returns/min:-0.002 - critic/values/mean:0.594 - critic/values/max:7.656 - critic/values/min:-3.719 - response_length/mean:234.320 - response_length/max:256.000 - response_length/min:88.000 - prompt_length/mean:102.762 - prompt_length/max:177.000 - prompt_length/min:68.000 - timing/gen:24.469 - timing/ref:3.435 - timing/values:3.224 - timing/adv:0.062 - timing/update_critic:9.298 - timing/update_actor:10.804 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:7 - critic/kl:0.001 - critic/kl_coeff:0.001 - critic/vf_loss:0.122 - critic/vf_clipfrac:0.129 - critic/vpred_mean:0.235 - critic/grad_norm:76.795 - critic/lr:0.000 - actor/entropy_loss:0.475 - actor/pg_loss:-0.004 - actor/pg_clipfrac:0.001 - actor/ppo_kl:0.000 - actor/grad_norm:2.153 - actor/lr:0.000 - critic/score/mean:0.012 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.012 - critic/rewards/max:1.000 - critic/rewards/min:-0.002 - critic/advantages/mean:-0.000 - critic/advantages/max:7.246 - critic/advantages/min:-10.160 - critic/returns/mean:0.010 - critic/returns/max:1.001 - critic/returns/min:-0.003 - critic/values/mean:0.508 - critic/values/max:4.094 - critic/values/min:-1.828 - response_length/mean:230.289 - response_length/max:256.000 - response_length/min:66.000 - prompt_length/mean:100.691 - prompt_length/max:186.000 - prompt_length/min:56.000 - timing/gen:23.792 - timing/ref:3.449 - timing/values:3.217 - timing/adv:0.064 - timing/update_critic:9.305 - timing/update_actor:10.897 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - 
timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:8 - critic/kl:0.001 - critic/kl_coeff:0.001 - critic/vf_loss:0.078 - critic/vf_clipfrac:0.038 - critic/vpred_mean:-0.035 - critic/grad_norm:73.372 - critic/lr:0.000 - actor/entropy_loss:0.459 - actor/pg_loss:-0.003 - actor/pg_clipfrac:0.001 - actor/ppo_kl:0.000 - actor/grad_norm:2.053 - actor/lr:0.000 - critic/score/mean:0.012 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.011 - critic/rewards/max:1.000 - critic/rewards/min:-0.002 - critic/advantages/mean:0.000 - critic/advantages/max:9.498 - critic/advantages/min:-8.295 - critic/returns/mean:0.009 - critic/returns/max:1.001 - critic/returns/min:-0.003 - critic/values/mean:0.297 - critic/values/max:2.438 - critic/values/min:-1.625 - response_length/mean:235.316 - response_length/max:256.000 - response_length/min:93.000 - prompt_length/mean:103.734 - prompt_length/max:174.000 - prompt_length/min:66.000 - timing/gen:24.379 - timing/ref:3.439 - timing/values:3.213 - timing/adv:0.063 - timing/update_critic:9.164 - timing/update_actor:10.861 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m validation generation end\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>system\n", "\u001b[36m(main_task pid=28294)\u001b[0m You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>user\n", "\u001b[36m(main_task pid=28294)\u001b[0m Frances sells 20 cupcakes for $2 for each cupcake and 40 cookies at $1 each. She buys five trays at $4 for each tray. How much money does Frances have left? 
Let's think step by step and output the final answer after \"####\".<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>assistant\n", "\u001b[36m(main_task pid=28294)\u001b[0m To determine how much money Frances has left after selling and buying items, we need to follow these steps:\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m 1. Calculate the total revenue from selling cupcakes.\n", "\u001b[36m(main_task pid=28294)\u001b[0m 2. Calculate the total revenue from selling cookies.\n", "\u001b[36m(main_task pid=28294)\u001b[0m 3. Calculate the total revenue from buying the trays.\n", "\u001b[36m(main_task pid=28294)\u001b[0m 4. Sum up all the revenues to get the total revenue.\n", "\u001b[36m(main_task pid=28294)\u001b[0m 5. Subtract the total revenue from the initial amount of money Frances had to find out how much money she has left.\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m Let's start with the first step:\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m 1. **Revenue from selling cupcakes:**\n", "\u001b[36m(main_task pid=28294)\u001b[0m - Frances sells 20 cupcakes at $2 each.\n", "\u001b[36m(main_task pid=28294)\u001b[0m - Total revenue from cupcakes = 20 cupcakes * $2/cupcake = $40.\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m 2. **Revenue from selling cookies:**\n", "\u001b[36m(main_task pid=28294)\u001b[0m - Frances sells 40 cookies at $1 each.\n", "\u001b[36m(main_task pid=28294)\u001b[0m - Total revenue from cookies = 40 cookies * $1/cookie = $40.\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m 3. 
**Revenue from buying the trays:**\n", "\u001b[36m(main_task pid=28294)\u001b[0m - Frances buys 5 trays at $4 each.\n", "\u001b[36m(main_task pid=28294)\u001b[0m - Total revenue from buying trays = 5 trays * $4/tray = $20.\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m 4. **Total revenue:**\n", "\u001b[36m(main_task pid=28294)\u001b[0m - Total revenue = Revenue from cupcakes + Revenue from cookies + Revenue from buying trays\n", "\u001b[36m(main_task pid=28294)\u001b[0m - Total revenue\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:9 - critic/kl:0.002 - critic/kl_coeff:0.001 - critic/vf_loss:0.041 - critic/vf_clipfrac:0.008 - critic/vpred_mean:-0.073 - critic/grad_norm:38.623 - critic/lr:0.000 - actor/entropy_loss:0.480 - actor/pg_loss:0.001 - actor/pg_clipfrac:0.001 - actor/ppo_kl:-0.000 - actor/grad_norm:2.148 - actor/lr:0.000 - val/test_score/openai/gsm8k:0.004 - critic/score/mean:0.012 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.011 - critic/rewards/max:1.000 - critic/rewards/min:-0.004 - critic/advantages/mean:0.000 - critic/advantages/max:7.622 - critic/advantages/min:-10.036 - critic/returns/mean:0.009 - critic/returns/max:1.000 - critic/returns/min:-0.004 - critic/values/mean:0.080 - critic/values/max:2.609 - critic/values/min:-1.609 - response_length/mean:228.992 - response_length/max:256.000 - response_length/min:60.000 - prompt_length/mean:100.484 - prompt_length/max:183.000 - prompt_length/min:65.000 - timing/gen:23.970 - timing/ref:3.402 - timing/values:3.268 - timing/adv:0.063 - timing/update_critic:9.110 - timing/update_actor:10.833 - timing/testing:59.966 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Saving actor checkpoint to 
checkpoints/verl_examples/gsm8k/actor/global_step_9\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:689: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:773: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:716: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Saving critic checkpoint to checkpoints/verl_examples/gsm8k/critic/global_step_9\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:773: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:716: UserWarning: When using ``NO_SHARD`` for 
``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:10 - critic/kl:0.003 - critic/kl_coeff:0.001 - critic/vf_loss:0.041 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.098 - critic/grad_norm:37.952 - critic/lr:0.000 - actor/entropy_loss:0.498 - actor/pg_loss:-0.000 - actor/pg_clipfrac:0.001 - actor/ppo_kl:0.000 - actor/grad_norm:2.188 - actor/lr:0.000 - critic/score/mean:0.020 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.019 - critic/rewards/max:1.000 - critic/rewards/min:-0.004 - critic/advantages/mean:0.000 - critic/advantages/max:8.178 - critic/advantages/min:-9.838 - critic/returns/mean:0.018 - critic/returns/max:1.001 - critic/returns/min:-0.004 - critic/values/mean:0.226 - critic/values/max:2.391 - critic/values/min:-1.391 - response_length/mean:224.570 - response_length/max:256.000 - response_length/min:36.000 - prompt_length/mean:100.480 - prompt_length/max:167.000 - prompt_length/min:68.000 - timing/gen:25.188 - timing/ref:3.398 - timing/values:3.243 - timing/adv:0.059 - timing/update_critic:9.055 - timing/update_actor:10.881 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:11 - critic/kl:0.003 - critic/kl_coeff:0.001 - critic/vf_loss:0.041 - critic/vf_clipfrac:0.005 - critic/vpred_mean:0.097 - critic/grad_norm:43.225 - critic/lr:0.000 - actor/entropy_loss:0.479 - actor/pg_loss:-0.008 - actor/pg_clipfrac:0.001 - actor/ppo_kl:-0.000 - actor/grad_norm:2.087 - actor/lr:0.000 - critic/score/mean:0.023 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.023 - critic/rewards/max:1.001 - critic/rewards/min:-0.005 - critic/advantages/mean:-0.000 - critic/advantages/max:9.081 - critic/advantages/min:-6.945 
- critic/returns/mean:0.023 - critic/returns/max:1.001 - critic/returns/min:-0.005 - critic/values/mean:0.291 - critic/values/max:1.672 - critic/values/min:-1.562 - response_length/mean:231.227 - response_length/max:256.000 - response_length/min:69.000 - prompt_length/mean:103.645 - prompt_length/max:184.000 - prompt_length/min:70.000 - timing/gen:24.139 - timing/ref:3.439 - timing/values:3.204 - timing/adv:0.060 - timing/update_critic:9.110 - timing/update_actor:10.813 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:12 - critic/kl:0.003 - critic/kl_coeff:0.001 - critic/vf_loss:0.031 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.093 - critic/grad_norm:31.382 - critic/lr:0.000 - actor/entropy_loss:0.466 - actor/pg_loss:-0.002 - actor/pg_clipfrac:0.001 - actor/ppo_kl:-0.000 - actor/grad_norm:2.228 - actor/lr:0.000 - critic/score/mean:0.023 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.023 - critic/rewards/max:1.001 - critic/rewards/min:-0.005 - critic/advantages/mean:0.000 - critic/advantages/max:6.806 - critic/advantages/min:-6.111 - critic/returns/mean:0.018 - critic/returns/max:1.002 - critic/returns/min:-0.006 - critic/values/mean:0.242 - critic/values/max:1.484 - critic/values/min:-0.688 - response_length/mean:229.203 - response_length/max:256.000 - response_length/min:86.000 - prompt_length/mean:104.930 - prompt_length/max:256.000 - prompt_length/min:62.000 - timing/gen:24.259 - timing/ref:3.418 - timing/values:3.193 - timing/adv:0.060 - timing/update_critic:9.209 - timing/update_actor:10.883 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m 
step:13 - critic/kl:0.004 - critic/kl_coeff:0.001 - critic/vf_loss:0.030 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.150 - critic/grad_norm:34.615 - critic/lr:0.000 - actor/entropy_loss:0.484 - actor/pg_loss:0.006 - actor/pg_clipfrac:0.001 - actor/ppo_kl:0.000 - actor/grad_norm:2.046 - actor/lr:0.000 - critic/score/mean:0.008 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.007 - critic/rewards/max:0.998 - critic/rewards/min:-0.006 - critic/advantages/mean:0.000 - critic/advantages/max:10.923 - critic/advantages/min:-5.137 - critic/returns/mean:0.008 - critic/returns/max:1.000 - critic/returns/min:-0.007 - critic/values/mean:0.283 - critic/values/max:0.961 - critic/values/min:-0.660 - response_length/mean:232.605 - response_length/max:256.000 - response_length/min:94.000 - prompt_length/mean:101.812 - prompt_length/max:171.000 - prompt_length/min:69.000 - timing/gen:24.266 - timing/ref:3.374 - timing/values:3.190 - timing/adv:0.059 - timing/update_critic:9.099 - timing/update_actor:10.888 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:14 - critic/kl:0.005 - critic/kl_coeff:0.001 - critic/vf_loss:0.039 - critic/vf_clipfrac:0.005 - critic/vpred_mean:-0.000 - critic/grad_norm:43.469 - critic/lr:0.000 - actor/entropy_loss:0.496 - actor/pg_loss:0.002 - actor/pg_clipfrac:0.002 - actor/ppo_kl:-0.000 - actor/grad_norm:2.275 - actor/lr:0.000 - critic/score/mean:0.031 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.030 - critic/rewards/max:0.999 - critic/rewards/min:-0.006 - critic/advantages/mean:0.000 - critic/advantages/max:6.506 - critic/advantages/min:-3.311 - critic/returns/mean:0.028 - critic/returns/max:1.001 - critic/returns/min:-0.007 - critic/values/mean:-0.228 - critic/values/max:0.363 - critic/values/min:-1.023 
- response_length/mean:226.652 - response_length/max:256.000 - response_length/min:16.000 - prompt_length/mean:102.953 - prompt_length/max:170.000 - prompt_length/min:69.000 - timing/gen:24.303 - timing/ref:3.398 - timing/values:3.207 - timing/adv:0.060 - timing/update_critic:9.210 - timing/update_actor:10.870 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:15 - critic/kl:0.007 - critic/kl_coeff:0.001 - critic/vf_loss:0.040 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.122 - critic/grad_norm:29.667 - critic/lr:0.000 - actor/entropy_loss:0.509 - actor/pg_loss:-0.003 - actor/pg_clipfrac:0.002 - actor/ppo_kl:-0.000 - actor/grad_norm:2.313 - actor/lr:0.000 - critic/score/mean:0.051 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.049 - critic/rewards/max:1.000 - critic/rewards/min:-0.008 - critic/advantages/mean:0.000 - critic/advantages/max:5.265 - critic/advantages/min:-3.315 - critic/returns/mean:0.043 - critic/returns/max:1.001 - critic/returns/min:-0.008 - critic/values/mean:-0.033 - critic/values/max:0.773 - critic/values/min:-0.602 - response_length/mean:222.965 - response_length/max:256.000 - response_length/min:45.000 - prompt_length/mean:107.332 - prompt_length/max:222.000 - prompt_length/min:68.000 - timing/gen:23.736 - timing/ref:3.414 - timing/values:3.171 - timing/adv:0.058 - timing/update_critic:9.015 - timing/update_actor:10.800 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:16 - critic/kl:0.012 - critic/kl_coeff:0.001 - critic/vf_loss:0.033 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.039 - critic/grad_norm:30.166 - 
critic/lr:0.000 - actor/entropy_loss:0.519 - actor/pg_loss:-0.000 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.363 - actor/lr:0.000 - critic/score/mean:0.039 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.037 - critic/rewards/max:1.000 - critic/rewards/min:-0.012 - critic/advantages/mean:-0.000 - critic/advantages/max:6.580 - critic/advantages/min:-3.047 - critic/returns/mean:0.030 - critic/returns/max:1.001 - critic/returns/min:-0.012 - critic/values/mean:-0.096 - critic/values/max:0.475 - critic/values/min:-0.582 - response_length/mean:224.695 - response_length/max:256.000 - response_length/min:17.000 - prompt_length/mean:103.379 - prompt_length/max:190.000 - prompt_length/min:68.000 - timing/gen:23.597 - timing/ref:3.361 - timing/values:3.187 - timing/adv:0.059 - timing/update_critic:8.919 - timing/update_actor:10.995 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:17 - critic/kl:0.016 - critic/kl_coeff:0.001 - critic/vf_loss:0.064 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.040 - critic/grad_norm:33.541 - critic/lr:0.000 - actor/entropy_loss:0.476 - actor/pg_loss:-0.009 - actor/pg_clipfrac:0.002 - actor/ppo_kl:-0.000 - actor/grad_norm:2.292 - actor/lr:0.000 - critic/score/mean:0.094 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.091 - critic/rewards/max:0.999 - critic/rewards/min:-0.011 - critic/advantages/mean:-0.000 - critic/advantages/max:4.398 - critic/advantages/min:-1.815 - critic/returns/mean:0.081 - critic/returns/max:1.002 - critic/returns/min:-0.011 - critic/values/mean:-0.122 - critic/values/max:0.375 - critic/values/min:-0.832 - response_length/mean:222.750 - response_length/max:256.000 - response_length/min:14.000 - prompt_length/mean:104.809 - prompt_length/max:186.000 - 
prompt_length/min:69.000 - timing/gen:23.406 - timing/ref:3.361 - timing/values:3.109 - timing/adv:0.060 - timing/update_critic:9.127 - timing/update_actor:10.768 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:18 - critic/kl:0.020 - critic/kl_coeff:0.001 - critic/vf_loss:0.057 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.080 - critic/grad_norm:32.595 - critic/lr:0.000 - actor/entropy_loss:0.496 - actor/pg_loss:-0.011 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.423 - actor/lr:0.000 - critic/score/mean:0.102 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.098 - critic/rewards/max:0.999 - critic/rewards/min:-0.015 - critic/advantages/mean:0.000 - critic/advantages/max:4.333 - critic/advantages/min:-2.007 - critic/returns/mean:0.084 - critic/returns/max:1.001 - critic/returns/min:-0.015 - critic/values/mean:-0.081 - critic/values/max:0.438 - critic/values/min:-0.605 - response_length/mean:220.078 - response_length/max:256.000 - response_length/min:29.000 - prompt_length/mean:105.336 - prompt_length/max:176.000 - prompt_length/min:69.000 - timing/gen:23.512 - timing/ref:3.375 - timing/values:3.197 - timing/adv:0.059 - timing/update_critic:9.130 - timing/update_actor:10.796 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m validation generation end\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>system\n", "\u001b[36m(main_task pid=28294)\u001b[0m You are Qwen, created by Alibaba Cloud. 
You are a helpful assistant.<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>user\n", "\u001b[36m(main_task pid=28294)\u001b[0m A loaf of bread at the bakery costs $2. Bagels cost $1 each. How much more do 3 loaves of bread cost than 2 bagels? Let's think step by step and output the final answer after \"####\".<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>assistant\n", "\u001b[36m(main_task pid=28294)\u001b[0m To solve this problem, we need to calculate the total cost of 3 loaves of bread and 2 bagels, and then find the difference between these two costs.\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m 1. **Calculate the cost of 3 loaves of bread:**\n", "\u001b[36m(main_task pid=28294)\u001b[0m - Each loaf of bread costs $2.\n", "\u001b[36m(main_task pid=28294)\u001b[0m - Therefore, 3 loaves of bread cost \\( 3 \\times 2 = \\$6 \\).\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m 2. **Calculate the cost of 2 bagels:**\n", "\u001b[36m(main_task pid=28294)\u001b[0m - Each bagel costs $1.\n", "\u001b[36m(main_task pid=28294)\u001b[0m - Therefore, 2 bagels cost \\( 2 \\times 1 = \\$2 \\).\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m 3. 
**Find the difference between the cost of 3 loaves of bread and 2 bagels:**\n", "\u001b[36m(main_task pid=28294)\u001b[0m - The difference is \\( 6 - 2 = \\$4 \\).\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m #### Final Answer: #### $4<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:19 - critic/kl:0.027 - critic/kl_coeff:0.001 - critic/vf_loss:0.055 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.112 - critic/grad_norm:26.858 - critic/lr:0.000 - actor/entropy_loss:0.449 - actor/pg_loss:-0.008 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.423 - actor/lr:0.000 - val/test_score/openai/gsm8k:0.187 - critic/score/mean:0.113 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.109 - critic/rewards/max:0.999 - critic/rewards/min:-0.016 - critic/advantages/mean:0.000 - critic/advantages/max:4.134 - critic/advantages/min:-1.849 - critic/returns/mean:0.101 - critic/returns/max:1.003 - critic/returns/min:-0.016 - critic/values/mean:-0.035 - critic/values/max:0.436 - critic/values/min:-0.504 - response_length/mean:221.965 - response_length/max:256.000 - response_length/min:6.000 - prompt_length/mean:103.352 - prompt_length/max:189.000 - prompt_length/min:72.000 - timing/gen:23.618 - timing/ref:3.393 - timing/values:3.151 - timing/adv:0.060 - timing/update_critic:9.072 - timing/update_actor:10.790 - timing/testing:57.314 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:689: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. 
Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Saving actor checkpoint to checkpoints/verl_examples/gsm8k/actor/global_step_19\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:689: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:773: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:716: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Saving critic checkpoint to checkpoints/verl_examples/gsm8k/critic/global_step_19\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m 
/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:773: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:716: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:20 - critic/kl:0.024 - critic/kl_coeff:0.001 - critic/vf_loss:0.067 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.128 - critic/grad_norm:30.260 - critic/lr:0.000 - actor/entropy_loss:0.471 - actor/pg_loss:-0.015 - actor/pg_clipfrac:0.001 - actor/ppo_kl:0.000 - actor/grad_norm:3.157 - actor/lr:0.000 - critic/score/mean:0.152 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.148 - critic/rewards/max:0.999 - critic/rewards/min:-0.016 - critic/advantages/mean:-0.000 - critic/advantages/max:3.422 - critic/advantages/min:-4.571 - critic/returns/mean:0.129 - critic/returns/max:1.006 - critic/returns/min:-0.017 - critic/values/mean:-0.009 - critic/values/max:1.398 - critic/values/min:-0.656 - response_length/mean:219.305 - response_length/max:256.000 - response_length/min:21.000 - prompt_length/mean:104.742 - prompt_length/max:199.000 - prompt_length/min:65.000 - timing/gen:24.937 - timing/ref:3.418 - timing/values:3.153 - timing/adv:0.059 - timing/update_critic:9.206 - timing/update_actor:10.821 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:21 - critic/kl:0.028 - critic/kl_coeff:0.001 - critic/vf_loss:0.067 - 
critic/vf_clipfrac:0.000 - critic/vpred_mean:0.147 - critic/grad_norm:23.865 - critic/lr:0.000 - actor/entropy_loss:0.431 - actor/pg_loss:-0.014 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.468 - actor/lr:0.000 - critic/score/mean:0.164 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.159 - critic/rewards/max:0.999 - critic/rewards/min:-0.017 - critic/advantages/mean:-0.000 - critic/advantages/max:3.344 - critic/advantages/min:-1.174 - critic/returns/mean:0.139 - critic/returns/max:1.002 - critic/returns/min:-0.017 - critic/values/mean:0.021 - critic/values/max:0.305 - critic/values/min:-0.371 - response_length/mean:218.129 - response_length/max:256.000 - response_length/min:61.000 - prompt_length/mean:103.434 - prompt_length/max:216.000 - prompt_length/min:68.000 - timing/gen:23.752 - timing/ref:3.414 - timing/values:3.145 - timing/adv:0.060 - timing/update_critic:9.022 - timing/update_actor:10.779 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:22 - critic/kl:0.029 - critic/kl_coeff:0.001 - critic/vf_loss:0.077 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.160 - critic/grad_norm:31.984 - critic/lr:0.000 - actor/entropy_loss:0.427 - actor/pg_loss:-0.011 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.264 - actor/lr:0.000 - critic/score/mean:0.199 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.193 - critic/rewards/max:0.999 - critic/rewards/min:-0.023 - critic/advantages/mean:-0.000 - critic/advantages/max:2.826 - critic/advantages/min:-1.320 - critic/returns/mean:0.161 - critic/returns/max:1.001 - critic/returns/min:-0.023 - critic/values/mean:0.038 - critic/values/max:0.354 - critic/values/min:-0.379 - response_length/mean:221.094 - response_length/max:256.000 - 
response_length/min:62.000 - prompt_length/mean:105.645 - prompt_length/max:215.000 - prompt_length/min:65.000 - timing/gen:23.868 - timing/ref:3.403 - timing/values:3.179 - timing/adv:0.060 - timing/update_critic:8.959 - timing/update_actor:10.798 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:23 - critic/kl:0.037 - critic/kl_coeff:0.001 - critic/vf_loss:0.092 - critic/vf_clipfrac:0.001 - critic/vpred_mean:0.196 - critic/grad_norm:30.678 - critic/lr:0.000 - actor/entropy_loss:0.415 - actor/pg_loss:-0.028 - actor/pg_clipfrac:0.003 - actor/ppo_kl:0.001 - actor/grad_norm:48.414 - actor/lr:0.000 - critic/score/mean:0.227 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.219 - critic/rewards/max:0.999 - critic/rewards/min:-0.138 - critic/advantages/mean:0.000 - critic/advantages/max:2.562 - critic/advantages/min:-9.078 - critic/returns/mean:0.185 - critic/returns/max:1.002 - critic/returns/min:-0.139 - critic/values/mean:0.071 - critic/values/max:3.969 - critic/values/min:-0.338 - response_length/mean:217.648 - response_length/max:256.000 - response_length/min:6.000 - prompt_length/mean:101.512 - prompt_length/max:191.000 - prompt_length/min:67.000 - timing/gen:23.482 - timing/ref:3.394 - timing/values:3.164 - timing/adv:0.059 - timing/update_critic:8.965 - timing/update_actor:10.772 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:24 - critic/kl:0.038 - critic/kl_coeff:0.001 - critic/vf_loss:0.095 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.285 - critic/grad_norm:35.689 - critic/lr:0.000 - actor/entropy_loss:0.427 - actor/pg_loss:-0.018 - 
actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.676 - actor/lr:0.000 - critic/score/mean:0.246 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.239 - critic/rewards/max:1.000 - critic/rewards/min:-0.024 - critic/advantages/mean:-0.000 - critic/advantages/max:2.936 - critic/advantages/min:-2.398 - critic/returns/mean:0.208 - critic/returns/max:1.002 - critic/returns/min:-0.027 - critic/values/mean:0.135 - critic/values/max:1.875 - critic/values/min:-0.235 - response_length/mean:215.758 - response_length/max:256.000 - response_length/min:81.000 - prompt_length/mean:106.418 - prompt_length/max:183.000 - prompt_length/min:64.000 - timing/gen:23.394 - timing/ref:3.387 - timing/values:3.190 - timing/adv:0.059 - timing/update_critic:8.992 - timing/update_actor:10.799 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:25 - critic/kl:0.008 - critic/kl_coeff:0.001 - critic/vf_loss:0.094 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.221 - critic/grad_norm:30.657 - critic/lr:0.000 - actor/entropy_loss:0.414 - actor/pg_loss:-0.004 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.003 - actor/grad_norm:34.984 - actor/lr:0.000 - critic/score/mean:0.250 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.251 - critic/rewards/max:2.024 - critic/rewards/min:-0.022 - critic/advantages/mean:0.000 - critic/advantages/max:5.709 - critic/advantages/min:-1.715 - critic/returns/mean:0.216 - critic/returns/max:2.050 - critic/returns/min:-0.022 - critic/values/mean:0.227 - critic/values/max:0.703 - critic/values/min:-1.055 - response_length/mean:210.586 - response_length/max:256.000 - response_length/min:78.000 - prompt_length/mean:103.406 - prompt_length/max:232.000 - prompt_length/min:64.000 - timing/gen:23.530 - timing/ref:3.407 - 
timing/values:3.228 - timing/adv:0.058 - timing/update_critic:8.998 - timing/update_actor:10.763 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:26 - critic/kl:0.047 - critic/kl_coeff:0.001 - critic/vf_loss:0.144 - critic/vf_clipfrac:0.031 - critic/vpred_mean:0.072 - critic/grad_norm:61.861 - critic/lr:0.000 - actor/entropy_loss:0.382 - actor/pg_loss:-0.016 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.559 - actor/lr:0.000 - critic/score/mean:0.285 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.277 - critic/rewards/max:0.998 - critic/rewards/min:-0.027 - critic/advantages/mean:-0.000 - critic/advantages/max:2.520 - critic/advantages/min:-1.550 - critic/returns/mean:0.250 - critic/returns/max:1.002 - critic/returns/min:-0.027 - critic/values/mean:0.465 - critic/values/max:0.863 - critic/values/min:0.074 - response_length/mean:207.973 - response_length/max:256.000 - response_length/min:24.000 - prompt_length/mean:103.566 - prompt_length/max:194.000 - prompt_length/min:66.000 - timing/gen:23.017 - timing/ref:3.432 - timing/values:3.209 - timing/adv:0.059 - timing/update_critic:9.104 - timing/update_actor:10.786 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:27 - critic/kl:0.048 - critic/kl_coeff:0.001 - critic/vf_loss:0.216 - critic/vf_clipfrac:0.175 - critic/vpred_mean:0.408 - critic/grad_norm:89.819 - critic/lr:0.000 - actor/entropy_loss:0.340 - actor/pg_loss:-0.018 - actor/pg_clipfrac:0.003 - actor/ppo_kl:-0.000 - actor/grad_norm:2.722 - actor/lr:0.000 - critic/score/mean:0.371 - critic/score/max:1.000 - critic/score/min:0.000 
- critic/rewards/mean:0.362 - critic/rewards/max:0.999 - critic/rewards/min:-0.025 - critic/advantages/mean:-0.000 - critic/advantages/max:2.400 - critic/advantages/min:-1.734 - critic/returns/mean:0.312 - critic/returns/max:1.002 - critic/returns/min:-0.026 - critic/values/mean:-0.169 - critic/values/max:0.279 - critic/values/min:-0.660 - response_length/mean:205.676 - response_length/max:256.000 - response_length/min:13.000 - prompt_length/mean:105.074 - prompt_length/max:238.000 - prompt_length/min:66.000 - timing/gen:23.320 - timing/ref:3.407 - timing/values:3.186 - timing/adv:0.058 - timing/update_critic:9.084 - timing/update_actor:10.797 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:28 - critic/kl:0.053 - critic/kl_coeff:0.001 - critic/vf_loss:0.186 - critic/vf_clipfrac:0.369 - critic/vpred_mean:0.509 - critic/grad_norm:81.161 - critic/lr:0.000 - actor/entropy_loss:0.373 - actor/pg_loss:-0.014 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.593 - actor/lr:0.000 - critic/score/mean:0.293 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.284 - critic/rewards/max:0.999 - critic/rewards/min:-0.024 - critic/advantages/mean:0.000 - critic/advantages/max:2.735 - critic/advantages/min:-1.528 - critic/returns/mean:0.238 - critic/returns/max:1.005 - critic/returns/min:-0.024 - critic/values/mean:0.910 - critic/values/max:1.336 - critic/values/min:0.416 - response_length/mean:202.500 - response_length/max:256.000 - response_length/min:67.000 - prompt_length/mean:103.758 - prompt_length/max:201.000 - prompt_length/min:70.000 - timing/gen:23.026 - timing/ref:3.427 - timing/values:3.172 - timing/adv:0.058 - timing/update_critic:9.130 - timing/update_actor:10.794 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - 
timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m validation generation end\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>system\n", "\u001b[36m(main_task pid=28294)\u001b[0m You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>user\n", "\u001b[36m(main_task pid=28294)\u001b[0m Kelian has two recipes for preparing dishes, one having 20 instructions and the second one having twice as many instructions as the first one. How many instructions does Kelian have to read to prepare the two dishes? Let's think step by step and output the final answer after \"####\".<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>assistant\n", "\u001b[36m(main_task pid=28294)\u001b[0m To determine the total number of instructions Kelian needs to read to prepare both dishes, we can follow these steps:\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m 1. Identify the number of instructions in the first recipe.\n", "\u001b[36m(main_task pid=28294)\u001b[0m 2. Calculate the number of instructions in the second recipe.\n", "\u001b[36m(main_task pid=28294)\u001b[0m 3. Add the number of instructions from both recipes together.\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m Step 1: The first recipe has 20 instructions.\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m Step 2: The second recipe has twice as many instructions as the first one. 
Therefore, the number of instructions in the second recipe is:\n", "\u001b[36m(main_task pid=28294)\u001b[0m \\[ 2 \\times 20 = 40 \\]\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m Step 3: Add the number of instructions from both recipes together:\n", "\u001b[36m(main_task pid=28294)\u001b[0m \\[ 20 + 40 = 60 \\]\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m So, the total number of instructions Kelian needs to read to prepare both dishes is:\n", "\u001b[36m(main_task pid=28294)\u001b[0m #### 60\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m Therefore, the final answer is:\n", "\u001b[36m(main_task pid=28294)\u001b[0m #### 60<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:29 - critic/kl:0.059 - critic/kl_coeff:0.001 - critic/vf_loss:0.106 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.381 - critic/grad_norm:30.421 - critic/lr:0.000 - actor/entropy_loss:0.343 - actor/pg_loss:-0.013 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.577 - actor/lr:0.000 - val/test_score/openai/gsm8k:0.344 - critic/score/mean:0.344 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.333 - critic/rewards/max:0.998 - critic/rewards/min:-0.026 - critic/advantages/mean:0.000 - critic/advantages/max:2.961 - critic/advantages/min:-1.600 - critic/returns/mean:0.290 - critic/returns/max:1.005 - critic/returns/min:-0.026 - critic/values/mean:0.208 - critic/values/max:0.637 - critic/values/min:-0.371 - response_length/mean:202.914 - response_length/max:256.000 - response_length/min:60.000 - prompt_length/mean:103.797 - prompt_length/max:160.000 - prompt_length/min:67.000 - timing/gen:23.198 - timing/ref:3.438 - timing/values:3.225 - timing/adv:0.058 - timing/update_critic:9.093 - timing/update_actor:10.812 - timing/testing:53.515 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - 
timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:689: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Saving actor checkpoint to checkpoints/verl_examples/gsm8k/actor/global_step_29\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Saving critic checkpoint to checkpoints/verl_examples/gsm8k/critic/global_step_29\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:689: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. 
API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:773: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:716: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:30 - critic/kl:0.071 - critic/kl_coeff:0.001 - critic/vf_loss:0.141 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.063 - critic/grad_norm:54.723 - critic/lr:0.000 - actor/entropy_loss:0.323 - actor/pg_loss:-0.016 - actor/pg_clipfrac:0.003 - actor/ppo_kl:0.000 - actor/grad_norm:2.743 - actor/lr:0.000 - critic/score/mean:0.367 - critic/score/max:1.000 - critic/score/min:0.000 -
critic/rewards/mean:0.356 - critic/rewards/max:0.996 - critic/rewards/min:-0.029 - critic/advantages/mean:-0.000 - critic/advantages/max:2.632 - critic/advantages/min:-1.622 - critic/returns/mean:0.305 - critic/returns/max:1.003 - critic/returns/min:-0.029 - critic/values/mean:0.019 - critic/values/max:0.445 - critic/values/min:-0.477 - response_length/mean:190.918 - response_length/max:256.000 - response_length/min:6.000 - prompt_length/mean:102.129 - prompt_length/max:186.000 - prompt_length/min:67.000 - timing/gen:23.954 - timing/ref:3.413 - timing/values:3.233 - timing/adv:0.057 - timing/update_critic:9.046 - timing/update_actor:10.778 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:31 - critic/kl:0.067 - critic/kl_coeff:0.001 - critic/vf_loss:0.103 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.375 - critic/grad_norm:26.295 - critic/lr:0.000 - actor/entropy_loss:0.335 - actor/pg_loss:-0.007 - actor/pg_clipfrac:0.003 - actor/ppo_kl:-0.000 - actor/grad_norm:4.026 - actor/lr:0.000 - critic/score/mean:0.371 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.361 - critic/rewards/max:0.997 - critic/rewards/min:-0.032 - critic/advantages/mean:-0.000 - critic/advantages/max:2.539 - critic/advantages/min:-1.678 - critic/returns/mean:0.305 - critic/returns/max:1.002 - critic/returns/min:-0.032 - critic/values/mean:0.350 - critic/values/max:0.805 - critic/values/min:-0.217 - response_length/mean:186.094 - response_length/max:256.000 - response_length/min:59.000 - prompt_length/mean:104.117 - prompt_length/max:191.000 - prompt_length/min:71.000 - timing/gen:22.221 - timing/ref:3.396 - timing/values:3.198 - timing/adv:0.058 - timing/update_critic:9.063 - timing/update_actor:10.775 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - 
timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:32 - critic/kl:0.081 - critic/kl_coeff:0.001 - critic/vf_loss:0.140 - critic/vf_clipfrac:0.203 - critic/vpred_mean:0.298 - critic/grad_norm:57.234 - critic/lr:0.000 - actor/entropy_loss:0.308 - actor/pg_loss:0.006 - actor/pg_clipfrac:0.003 - actor/ppo_kl:0.000 - actor/grad_norm:2.751 - actor/lr:0.000 - critic/score/mean:0.391 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.378 - critic/rewards/max:0.997 - critic/rewards/min:-0.028 - critic/advantages/mean:0.000 - critic/advantages/max:2.417 - critic/advantages/min:-1.948 - critic/returns/mean:0.340 - critic/returns/max:1.002 - critic/returns/min:-0.028 - critic/values/mean:0.660 - critic/values/max:1.195 - critic/values/min:0.005 - response_length/mean:181.852 - response_length/max:256.000 - response_length/min:56.000 - prompt_length/mean:103.426 - prompt_length/max:222.000 - prompt_length/min:62.000 - timing/gen:22.361 - timing/ref:3.393 - timing/values:3.252 - timing/adv:0.057 - timing/update_critic:9.115 - timing/update_actor:10.793 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:33 - critic/kl:0.079 - critic/kl_coeff:0.001 - critic/vf_loss:0.118 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.325 - critic/grad_norm:35.494 - critic/lr:0.000 - actor/entropy_loss:0.309 - actor/pg_loss:-0.006 - actor/pg_clipfrac:0.003 - actor/ppo_kl:-0.000 - actor/grad_norm:3.946 - actor/lr:0.000 - critic/score/mean:0.383 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.370 - critic/rewards/max:1.076 - critic/rewards/min:-0.030 - critic/advantages/mean:0.000 - critic/advantages/max:2.411 - 
critic/advantages/min:-2.163 - critic/returns/mean:0.324 - critic/returns/max:1.116 - critic/returns/min:-0.030 - critic/values/mean:0.056 - critic/values/max:0.762 - critic/values/min:-0.660 - response_length/mean:188.148 - response_length/max:256.000 - response_length/min:53.000 - prompt_length/mean:103.168 - prompt_length/max:177.000 - prompt_length/min:69.000 - timing/gen:22.639 - timing/ref:3.408 - timing/values:3.211 - timing/adv:0.058 - timing/update_critic:9.104 - timing/update_actor:10.776 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:34 - critic/kl:0.079 - critic/kl_coeff:0.001 - critic/vf_loss:0.152 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.070 - critic/grad_norm:64.022 - critic/lr:0.000 - actor/entropy_loss:0.286 - actor/pg_loss:-0.020 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.678 - actor/lr:0.000 - critic/score/mean:0.426 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.413 - critic/rewards/max:0.996 - critic/rewards/min:-0.035 - critic/advantages/mean:0.000 - critic/advantages/max:2.692 - critic/advantages/min:-2.171 - critic/returns/mean:0.350 - critic/returns/max:1.001 - critic/returns/min:-0.035 - critic/values/mean:-0.025 - critic/values/max:0.609 - critic/values/min:-0.793 - response_length/mean:185.578 - response_length/max:256.000 - response_length/min:55.000 - prompt_length/mean:100.945 - prompt_length/max:182.000 - prompt_length/min:67.000 - timing/gen:22.709 - timing/ref:3.445 - timing/values:3.195 - timing/adv:0.057 - timing/update_critic:9.043 - timing/update_actor:10.788 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", 
"\u001b[36m(main_task pid=28294)\u001b[0m step:35 - critic/kl:0.085 - critic/kl_coeff:0.001 - critic/vf_loss:0.099 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.400 - critic/grad_norm:17.049 - critic/lr:0.000 - actor/entropy_loss:0.304 - actor/pg_loss:-0.003 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.894 - actor/lr:0.000 - critic/score/mean:0.445 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.432 - critic/rewards/max:0.998 - critic/rewards/min:-0.029 - critic/advantages/mean:-0.000 - critic/advantages/max:2.507 - critic/advantages/min:-2.582 - critic/returns/mean:0.389 - critic/returns/max:1.007 - critic/returns/min:-0.030 - critic/values/mean:0.469 - critic/values/max:1.203 - critic/values/min:-0.277 - response_length/mean:182.477 - response_length/max:256.000 - response_length/min:58.000 - prompt_length/mean:105.250 - prompt_length/max:256.000 - prompt_length/min:56.000 - timing/gen:22.272 - timing/ref:3.390 - timing/values:3.223 - timing/adv:0.058 - timing/update_critic:9.046 - timing/update_actor:10.785 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:36 - critic/kl:0.086 - critic/kl_coeff:0.001 - critic/vf_loss:0.094 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.395 - critic/grad_norm:14.511 - critic/lr:0.000 - actor/entropy_loss:0.256 - actor/pg_loss:-0.045 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.899 - actor/lr:0.000 - critic/score/mean:0.473 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.459 - critic/rewards/max:1.001 - critic/rewards/min:-0.026 - critic/advantages/mean:-0.000 - critic/advantages/max:2.743 - critic/advantages/min:-2.513 - critic/returns/mean:0.385 - critic/returns/max:1.001 - critic/returns/min:-0.026 - critic/values/mean:0.459 - 
critic/values/max:1.219 - critic/values/min:-0.252 - response_length/mean:181.793 - response_length/max:256.000 - response_length/min:62.000 - prompt_length/mean:104.781 - prompt_length/max:211.000 - prompt_length/min:63.000 - timing/gen:22.103 - timing/ref:3.386 - timing/values:3.202 - timing/adv:0.056 - timing/update_critic:9.009 - timing/update_actor:10.751 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:37 - critic/kl:0.090 - critic/kl_coeff:0.001 - critic/vf_loss:0.103 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.414 - critic/grad_norm:11.328 - critic/lr:0.000 - actor/entropy_loss:0.291 - actor/pg_loss:0.004 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.824 - actor/lr:0.000 - critic/score/mean:0.457 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.443 - critic/rewards/max:0.994 - critic/rewards/min:-0.036 - critic/advantages/mean:-0.000 - critic/advantages/max:2.719 - critic/advantages/min:-2.849 - critic/returns/mean:0.396 - critic/returns/max:1.001 - critic/returns/min:-0.036 - critic/values/mean:0.430 - critic/values/max:1.312 - critic/values/min:-0.498 - response_length/mean:178.871 - response_length/max:256.000 - response_length/min:58.000 - prompt_length/mean:102.766 - prompt_length/max:199.000 - prompt_length/min:66.000 - timing/gen:22.195 - timing/ref:3.405 - timing/values:3.201 - timing/adv:0.056 - timing/update_critic:9.014 - timing/update_actor:10.757 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:38 - critic/kl:0.093 - critic/kl_coeff:0.001 - critic/vf_loss:0.131 - critic/vf_clipfrac:0.115 - 
critic/vpred_mean:0.241 - critic/grad_norm:55.256 - critic/lr:0.000 - actor/entropy_loss:0.260 - actor/pg_loss:-0.011 - actor/pg_clipfrac:0.003 - actor/ppo_kl:0.001 - actor/grad_norm:2.904 - actor/lr:0.000 - critic/score/mean:0.488 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.474 - critic/rewards/max:0.996 - critic/rewards/min:-0.025 - critic/advantages/mean:0.000 - critic/advantages/max:2.554 - critic/advantages/min:-2.374 - critic/returns/mean:0.421 - critic/returns/max:1.003 - critic/returns/min:-0.025 - critic/values/mean:0.053 - critic/values/max:0.777 - critic/values/min:-0.809 - response_length/mean:178.625 - response_length/max:256.000 - response_length/min:50.000 - prompt_length/mean:105.059 - prompt_length/max:206.000 - prompt_length/min:70.000 - timing/gen:21.984 - timing/ref:3.395 - timing/values:3.208 - timing/adv:0.057 - timing/update_critic:9.015 - timing/update_actor:10.827 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m validation generation end\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>system\n", "\u001b[36m(main_task pid=28294)\u001b[0m You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>user\n", "\u001b[36m(main_task pid=28294)\u001b[0m Bob has a certain number of marbles. If he receives 2 dozen more marbles, he will have 60 marbles. If he loses 10 of the marbles he has, how many marbles will Bob have? 
Let's think step by step and output the final answer after \"####\".<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>assistant\n", "\u001b[36m(main_task pid=28294)\u001b[0m Let's denote the number of marbles Bob originally has as \\( x \\).\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m First, we know that if Bob receives 2 dozen more marbles, he will have 60 marbles. Since 1 dozen is equal to 12, 2 dozen is equal to \\( 2 \\times 12 = 24 \\) marbles. Therefore, we can write the equation:\n", "\u001b[36m(main_task pid=28294)\u001b[0m \\[ x + 24 = 60 \\]\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m To find \\( x \\), we subtract 24 from both sides of the equation:\n", "\u001b[36m(main_task pid=28294)\u001b[0m \\[ x = 60 - 24 \\]\n", "\u001b[36m(main_task pid=28294)\u001b[0m \\[ x = 36 \\]\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m So, Bob originally has 36 marbles. 
If he loses 10 of the marbles, he will have:\n", "\u001b[36m(main_task pid=28294)\u001b[0m \\[ 36 - 10 = 26 \\]\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m #### 26<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:39 - critic/kl:0.084 - critic/kl_coeff:0.001 - critic/vf_loss:0.132 - critic/vf_clipfrac:0.105 - critic/vpred_mean:0.500 - critic/grad_norm:54.355 - critic/lr:0.000 - actor/entropy_loss:0.267 - actor/pg_loss:-0.019 - actor/pg_clipfrac:0.004 - actor/ppo_kl:-0.001 - actor/grad_norm:2.948 - actor/lr:0.000 - val/test_score/openai/gsm8k:0.454 - critic/score/mean:0.496 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.483 - critic/rewards/max:1.289 - critic/rewards/min:-0.031 - critic/advantages/mean:-0.000 - critic/advantages/max:2.662 - critic/advantages/min:-2.130 - critic/returns/mean:0.422 - critic/returns/max:1.423 - critic/returns/min:-0.031 - critic/values/mean:0.777 - critic/values/max:1.500 - critic/values/min:-0.105 - response_length/mean:185.688 - response_length/max:256.000 - response_length/min:67.000 - prompt_length/mean:104.715 - prompt_length/max:167.000 - prompt_length/min:64.000 - timing/gen:22.555 - timing/ref:3.405 - timing/values:3.163 - timing/adv:0.057 - timing/update_critic:9.071 - timing/update_actor:10.771 - timing/testing:48.743 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Saving actor checkpoint to checkpoints/verl_examples/gsm8k/actor/global_step_39\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:689: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. 
Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Saving critic checkpoint to checkpoints/verl_examples/gsm8k/critic/global_step_39\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:689: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:773: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:716: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:40 - critic/kl:0.082 - critic/kl_coeff:0.001 - critic/vf_loss:0.147 - critic/vf_clipfrac:0.002 - critic/vpred_mean:0.669 - critic/grad_norm:62.551 - critic/lr:0.000 - actor/entropy_loss:0.274 - actor/pg_loss:-0.012 - actor/pg_clipfrac:0.004 - actor/ppo_kl:-0.001 - actor/grad_norm:4.891 - actor/lr:0.000 - critic/score/mean:0.453 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.440 - critic/rewards/max:1.374 - critic/rewards/min:-0.128 - critic/advantages/mean:-0.000 - critic/advantages/max:3.228 - critic/advantages/min:-2.352 - critic/returns/mean:0.381 - critic/returns/max:1.547 - critic/returns/min:-0.128 - critic/values/mean:0.383 - critic/values/max:1.055 - critic/values/min:-0.465 - response_length/mean:189.316 - response_length/max:256.000 - response_length/min:56.000 - prompt_length/mean:104.215 - prompt_length/max:215.000 - prompt_length/min:72.000 - timing/gen:23.543 - timing/ref:3.408 - timing/values:3.206 - timing/adv:0.058 - timing/update_critic:8.990 - timing/update_actor:10.767 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:41 - critic/kl:0.100 - critic/kl_coeff:0.001 - critic/vf_loss:0.116 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.438 - critic/grad_norm:27.778 - critic/lr:0.000 - actor/entropy_loss:0.235 - actor/pg_loss:-0.017
- actor/pg_clipfrac:0.003 - actor/ppo_kl:-0.000 - actor/grad_norm:3.307 - actor/lr:0.000 - critic/score/mean:0.531 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.516 - critic/rewards/max:0.997 - critic/rewards/min:-0.027 - critic/advantages/mean:-0.000 - critic/advantages/max:2.631 - critic/advantages/min:-2.307 - critic/returns/mean:0.463 - critic/returns/max:1.003 - critic/returns/min:-0.027 - critic/values/mean:0.381 - critic/values/max:1.023 - critic/values/min:-0.402 - response_length/mean:176.273 - response_length/max:256.000 - response_length/min:51.000 - prompt_length/mean:103.426 - prompt_length/max:183.000 - prompt_length/min:69.000 - timing/gen:21.996 - timing/ref:3.393 - timing/values:3.232 - timing/adv:0.056 - timing/update_critic:9.024 - timing/update_actor:10.754 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:42 - critic/kl:0.102 - critic/kl_coeff:0.001 - critic/vf_loss:0.105 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.490 - critic/grad_norm:19.715 - critic/lr:0.000 - actor/entropy_loss:0.239 - actor/pg_loss:-0.015 - actor/pg_clipfrac:0.003 - actor/ppo_kl:0.000 - actor/grad_norm:2.923 - actor/lr:0.000 - critic/score/mean:0.504 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.488 - critic/rewards/max:0.994 - critic/rewards/min:-0.034 - critic/advantages/mean:-0.000 - critic/advantages/max:2.133 - critic/advantages/min:-2.226 - critic/returns/mean:0.432 - critic/returns/max:1.001 - critic/returns/min:-0.034 - critic/values/mean:0.594 - critic/values/max:1.148 - critic/values/min:-0.068 - response_length/mean:175.164 - response_length/max:256.000 - response_length/min:47.000 - prompt_length/mean:103.246 - prompt_length/max:201.000 - prompt_length/min:68.000 - timing/gen:21.940 - timing/ref:3.386 - 
timing/values:3.189 - timing/adv:0.056 - timing/update_critic:9.052 - timing/update_actor:10.768 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:43 - critic/kl:0.106 - critic/kl_coeff:0.001 - critic/vf_loss:0.100 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.462 - critic/grad_norm:23.429 - critic/lr:0.000 - actor/entropy_loss:0.222 - actor/pg_loss:-0.020 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.573 - actor/lr:0.000 - critic/score/mean:0.559 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.542 - critic/rewards/max:1.001 - critic/rewards/min:-0.034 - critic/advantages/mean:0.000 - critic/advantages/max:2.340 - critic/advantages/min:-2.101 - critic/returns/mean:0.470 - critic/returns/max:1.003 - critic/returns/min:-0.034 - critic/values/mean:0.559 - critic/values/max:1.211 - critic/values/min:-0.291 - response_length/mean:176.883 - response_length/max:256.000 - response_length/min:71.000 - prompt_length/mean:104.879 - prompt_length/max:232.000 - prompt_length/min:64.000 - timing/gen:22.334 - timing/ref:3.384 - timing/values:3.175 - timing/adv:0.058 - timing/update_critic:9.083 - timing/update_actor:10.772 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:44 - critic/kl:0.103 - critic/kl_coeff:0.001 - critic/vf_loss:0.097 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.464 - critic/grad_norm:20.556 - critic/lr:0.000 - actor/entropy_loss:0.231 - actor/pg_loss:-0.020 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.748 - actor/lr:0.000 - critic/score/mean:0.508 - critic/score/max:1.000 - critic/score/min:0.000 
- critic/rewards/mean:0.492 - critic/rewards/max:0.997 - critic/rewards/min:-0.028 - critic/advantages/mean:-0.000 - critic/advantages/max:2.425 - critic/advantages/min:-2.328 - critic/returns/mean:0.433 - critic/returns/max:1.003 - critic/returns/min:-0.028 - critic/values/mean:0.555 - critic/values/max:1.250 - critic/values/min:-0.227 - response_length/mean:172.953 - response_length/max:256.000 - response_length/min:73.000 - prompt_length/mean:105.375 - prompt_length/max:178.000 - prompt_length/min:69.000 - timing/gen:21.645 - timing/ref:3.378 - timing/values:3.219 - timing/adv:0.057 - timing/update_critic:9.054 - timing/update_actor:11.071 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:45 - critic/kl:0.110 - critic/kl_coeff:0.001 - critic/vf_loss:0.095 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.501 - critic/grad_norm:17.849 - critic/lr:0.000 - actor/entropy_loss:0.254 - actor/pg_loss:-0.023 - actor/pg_clipfrac:0.003 - actor/ppo_kl:0.000 - actor/grad_norm:4.222 - actor/lr:0.000 - critic/score/mean:0.555 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.538 - critic/rewards/max:0.992 - critic/rewards/min:-0.028 - critic/advantages/mean:-0.000 - critic/advantages/max:2.372 - critic/advantages/min:-2.466 - critic/returns/mean:0.476 - critic/returns/max:1.082 - critic/returns/min:-0.028 - critic/values/mean:0.590 - critic/values/max:1.406 - critic/values/min:-0.508 - response_length/mean:173.477 - response_length/max:256.000 - response_length/min:59.000 - prompt_length/mean:102.488 - prompt_length/max:238.000 - prompt_length/min:69.000 - timing/gen:21.763 - timing/ref:3.466 - timing/values:3.225 - timing/adv:0.057 - timing/update_critic:9.034 - timing/update_actor:10.801 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - 
timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:46 - critic/kl:0.114 - critic/kl_coeff:0.001 - critic/vf_loss:0.095 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.498 - critic/grad_norm:11.832 - critic/lr:0.000 - actor/entropy_loss:0.233 - actor/pg_loss:-0.010 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.825 - actor/lr:0.000 - critic/score/mean:0.512 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.494 - critic/rewards/max:0.992 - critic/rewards/min:-0.034 - critic/advantages/mean:-0.000 - critic/advantages/max:3.366 - critic/advantages/min:-2.487 - critic/returns/mean:0.444 - critic/returns/max:1.002 - critic/returns/min:-0.034 - critic/values/mean:0.621 - critic/values/max:1.359 - critic/values/min:-0.322 - response_length/mean:173.594 - response_length/max:256.000 - response_length/min:71.000 - prompt_length/mean:104.762 - prompt_length/max:193.000 - prompt_length/min:65.000 - timing/gen:21.762 - timing/ref:3.370 - timing/values:3.120 - timing/adv:0.057 - timing/update_critic:9.026 - timing/update_actor:10.818 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:47 - critic/kl:0.114 - critic/kl_coeff:0.001 - critic/vf_loss:0.114 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.353 - critic/grad_norm:28.083 - critic/lr:0.000 - actor/entropy_loss:0.226 - actor/pg_loss:-0.002 - actor/pg_clipfrac:0.003 - actor/ppo_kl:-0.000 - actor/grad_norm:2.908 - actor/lr:0.000 - critic/score/mean:0.531 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.514 - critic/rewards/max:0.998 - critic/rewards/min:-0.044 - critic/advantages/mean:-0.000 - critic/advantages/max:2.695 - 
critic/advantages/min:-2.506 - critic/returns/mean:0.453 - critic/returns/max:1.001 - critic/returns/min:-0.044 - critic/values/mean:0.326 - critic/values/max:0.992 - critic/values/min:-0.559 - response_length/mean:172.051 - response_length/max:256.000 - response_length/min:57.000 - prompt_length/mean:102.000 - prompt_length/max:179.000 - prompt_length/min:67.000 - timing/gen:22.352 - timing/ref:3.478 - timing/values:3.231 - timing/adv:0.057 - timing/update_critic:9.098 - timing/update_actor:10.837 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.001\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:48 - critic/kl:0.106 - critic/kl_coeff:0.001 - critic/vf_loss:0.116 - critic/vf_clipfrac:0.010 - critic/vpred_mean:0.528 - critic/grad_norm:37.496 - critic/lr:0.000 - actor/entropy_loss:0.218 - actor/pg_loss:-0.008 - actor/pg_clipfrac:0.003 - actor/ppo_kl:0.000 - actor/grad_norm:2.889 - actor/lr:0.000 - critic/score/mean:0.500 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.483 - critic/rewards/max:0.994 - critic/rewards/min:-0.031 - critic/advantages/mean:-0.000 - critic/advantages/max:2.590 - critic/advantages/min:-2.222 - critic/returns/mean:0.422 - critic/returns/max:1.003 - critic/returns/min:-0.031 - critic/values/mean:0.652 - critic/values/max:1.352 - critic/values/min:-0.207 - response_length/mean:179.723 - response_length/max:256.000 - response_length/min:62.000 - prompt_length/mean:103.219 - prompt_length/max:180.000 - prompt_length/min:65.000 - timing/gen:22.223 - timing/ref:3.493 - timing/values:3.271 - timing/adv:0.057 - timing/update_critic:9.073 - timing/update_actor:10.817 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.000\n", 
"\u001b[36m(main_task pid=28294)\u001b[0m validation generation end\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>system\n", "\u001b[36m(main_task pid=28294)\u001b[0m You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>user\n", "\u001b[36m(main_task pid=28294)\u001b[0m Mr Hezekiah had 20 trucks from his store supplying fertiliser to different farmers in his hometown dispatched for delivery on a particular day. Each truck was carrying 20 tons of fertiliser packed in bags. Two hours after the trucks had departed for delivery, Mr Hezekiah got the news that a quarter of the number of lorries dispatched for delivery had mechanical failures on the road and could not deliver the fertilisers to the farmers. Calculate the total number of tons of fertiliser that reached the farmers that day? Let's think step by step and output the final answer after \"####\".<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>assistant\n", "\u001b[36m(main_task pid=28294)\u001b[0m Let's break down the problem step by step:\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m 1. **Initial number of trucks**: 20 trucks\n", "\u001b[36m(main_task pid=28294)\u001b[0m 2. **Fertiliser per truck**: 20 tons\n", "\u001b[36m(main_task pid=28294)\u001b[0m 3. **Total fertiliser**: \\(20 \\text{ trucks} \\times 20 \\text{ tons/truck} = 400 \\text{ tons}\\)\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m 4. **Number of trucks with mechanical failures**: \\(20 \\text{ trucks} \\times \\frac{1}{4} = 5 \\text{ trucks}\\)\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m 5. 
**Fertiliser delivered to farmers**: \\(400 \\text{ tons} - 5 \\text{ trucks} \\times 20 \\text{ tons/truck} = 400 \\text{ tons} - 100 \\text{ tons} = 300 \\text{ tons}\\)\n", "\u001b[36m(main_task pid=28294)\u001b[0m \n", "\u001b[36m(main_task pid=28294)\u001b[0m #### Final Answer:\n", "\u001b[36m(main_task pid=28294)\u001b[0m #### 300<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:49 - critic/kl:0.120 - critic/kl_coeff:0.001 - critic/vf_loss:0.119 - critic/vf_clipfrac:0.009 - critic/vpred_mean:0.338 - critic/grad_norm:32.179 - critic/lr:0.000 - actor/entropy_loss:0.224 - actor/pg_loss:-0.034 - actor/pg_clipfrac:0.003 - actor/ppo_kl:0.000 - actor/grad_norm:3.854 - actor/lr:0.000 - val/test_score/openai/gsm8k:0.466 - critic/score/mean:0.484 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.466 - critic/rewards/max:0.994 - critic/rewards/min:-0.031 - critic/advantages/mean:0.000 - critic/advantages/max:3.497 - critic/advantages/min:-2.368 - critic/returns/mean:0.411 - critic/returns/max:1.003 - critic/returns/min:-0.031 - critic/values/mean:0.094 - critic/values/max:0.820 - critic/values/min:-0.879 - response_length/mean:171.238 - response_length/max:256.000 - response_length/min:66.000 - prompt_length/mean:104.562 - prompt_length/max:169.000 - prompt_length/min:68.000 - timing/gen:21.997 - timing/ref:3.408 - timing/values:3.209 - timing/adv:0.056 - timing/update_critic:8.998 - timing/update_actor:10.805 - timing/testing:47.274 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.001\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Saving actor checkpoint to checkpoints/verl_examples/gsm8k/actor/global_step_49\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:689: FutureWarning: 
FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:773: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:716: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Saving critic checkpoint to checkpoints/verl_examples/gsm8k/critic/global_step_49\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m 
/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:773: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:716: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:50 - critic/kl:0.122 - critic/kl_coeff:0.001 - critic/vf_loss:0.129 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.658 - critic/grad_norm:41.948 - critic/lr:0.000 - actor/entropy_loss:0.220 - actor/pg_loss:-0.009 - actor/pg_clipfrac:0.003 - actor/ppo_kl:0.000 - actor/grad_norm:3.246 - actor/lr:0.000 - critic/score/mean:0.539 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.520 - critic/rewards/max:0.995 - critic/rewards/min:-0.038 - critic/advantages/mean:-0.000 - critic/advantages/max:2.751 - critic/advantages/min:-2.409 - critic/returns/mean:0.463 - critic/returns/max:1.002 - critic/returns/min:-0.038 - critic/values/mean:0.432 - critic/values/max:1.086 - critic/values/min:-0.445 - response_length/mean:172.555 - response_length/max:256.000 - response_length/min:65.000 - prompt_length/mean:102.969 - prompt_length/max:196.000 - prompt_length/min:68.000 - timing/gen:22.668 - timing/ref:3.455 - timing/values:3.235 - timing/adv:0.057 - timing/update_critic:9.077 - timing/update_actor:10.784 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.001\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:51 - critic/kl:0.130 - critic/kl_coeff:0.001 - critic/vf_loss:0.113 - 
critic/vf_clipfrac:0.000 - critic/vpred_mean:0.454 - critic/grad_norm:33.814 - critic/lr:0.000 - actor/entropy_loss:0.230 - actor/pg_loss:-0.012 - actor/pg_clipfrac:0.004 - actor/ppo_kl:0.001 - actor/grad_norm:10.359 - actor/lr:0.000 - critic/score/mean:0.539 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.518 - critic/rewards/max:0.993 - critic/rewards/min:-0.035 - critic/advantages/mean:0.000 - critic/advantages/max:2.335 - critic/advantages/min:-2.426 - critic/returns/mean:0.468 - critic/returns/max:1.196 - critic/returns/min:-0.039 - critic/values/mean:0.711 - critic/values/max:1.383 - critic/values/min:-0.250 - response_length/mean:169.363 - response_length/max:256.000 - response_length/min:62.000 - prompt_length/mean:104.457 - prompt_length/max:174.000 - prompt_length/min:64.000 - timing/gen:21.843 - timing/ref:3.442 - timing/values:3.255 - timing/adv:0.057 - timing/update_critic:9.358 - timing/update_actor:10.775 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.001\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:52 - critic/kl:0.124 - critic/kl_coeff:0.001 - critic/vf_loss:0.112 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.614 - critic/grad_norm:38.258 - critic/lr:0.000 - actor/entropy_loss:0.190 - actor/pg_loss:-0.015 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.922 - actor/lr:0.000 - critic/score/mean:0.559 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.539 - critic/rewards/max:0.993 - critic/rewards/min:-0.036 - critic/advantages/mean:-0.000 - critic/advantages/max:2.523 - critic/advantages/min:-2.443 - critic/returns/mean:0.494 - critic/returns/max:1.000 - critic/returns/min:-0.036 - critic/values/mean:0.703 - critic/values/max:1.430 - critic/values/min:-0.266 - response_length/mean:170.574 - response_length/max:256.000 - 
response_length/min:61.000 - prompt_length/mean:103.676 - prompt_length/max:217.000 - prompt_length/min:68.000 - timing/gen:21.955 - timing/ref:3.386 - timing/values:3.169 - timing/adv:0.057 - timing/update_critic:9.108 - timing/update_actor:10.752 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.001\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:53 - critic/kl:0.127 - critic/kl_coeff:0.001 - critic/vf_loss:0.104 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.467 - critic/grad_norm:27.255 - critic/lr:0.000 - actor/entropy_loss:0.189 - actor/pg_loss:-0.007 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.912 - actor/lr:0.000 - critic/score/mean:0.570 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.551 - critic/rewards/max:0.993 - critic/rewards/min:-0.042 - critic/advantages/mean:0.000 - critic/advantages/max:2.255 - critic/advantages/min:-2.197 - critic/returns/mean:0.498 - critic/returns/max:1.001 - critic/returns/min:-0.044 - critic/values/mean:0.233 - critic/values/max:0.918 - critic/values/min:-0.551 - response_length/mean:167.039 - response_length/max:256.000 - response_length/min:68.000 - prompt_length/mean:103.355 - prompt_length/max:172.000 - prompt_length/min:65.000 - timing/gen:21.915 - timing/ref:3.404 - timing/values:3.129 - timing/adv:0.057 - timing/update_critic:9.042 - timing/update_actor:10.777 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.001\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:54 - critic/kl:0.132 - critic/kl_coeff:0.001 - critic/vf_loss:0.121 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.266 - critic/grad_norm:44.310 - critic/lr:0.000 - actor/entropy_loss:0.211 - actor/pg_loss:-0.019 - 
actor/pg_clipfrac:0.003 - actor/ppo_kl:0.000 - actor/grad_norm:2.932 - actor/lr:0.000 - critic/score/mean:0.539 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.519 - critic/rewards/max:0.994 - critic/rewards/min:-0.033 - critic/advantages/mean:0.000 - critic/advantages/max:2.432 - critic/advantages/min:-2.403 - critic/returns/mean:0.450 - critic/returns/max:1.003 - critic/returns/min:-0.033 - critic/values/mean:0.508 - critic/values/max:1.195 - critic/values/min:-0.453 - response_length/mean:168.863 - response_length/max:256.000 - response_length/min:59.000 - prompt_length/mean:103.875 - prompt_length/max:170.000 - prompt_length/min:69.000 - timing/gen:21.853 - timing/ref:3.451 - timing/values:3.251 - timing/adv:0.056 - timing/update_critic:9.024 - timing/update_actor:10.781 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.001\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:55 - critic/kl:0.172 - critic/kl_coeff:0.001 - critic/vf_loss:0.091 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.502 - critic/grad_norm:10.118 - critic/lr:0.000 - actor/entropy_loss:0.206 - actor/pg_loss:-0.022 - actor/pg_clipfrac:0.004 - actor/ppo_kl:-0.000 - actor/grad_norm:9.115 - actor/lr:0.000 - critic/score/mean:0.559 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.533 - critic/rewards/max:0.990 - critic/rewards/min:-0.136 - critic/advantages/mean:0.000 - critic/advantages/max:2.456 - critic/advantages/min:-2.538 - critic/returns/mean:0.478 - critic/returns/max:1.003 - critic/returns/min:-0.136 - critic/values/mean:0.494 - critic/values/max:1.188 - critic/values/min:-0.424 - response_length/mean:159.633 - response_length/max:256.000 - response_length/min:8.000 - prompt_length/mean:103.645 - prompt_length/max:199.000 - prompt_length/min:63.000 - timing/gen:21.694 - timing/ref:3.410 - 
timing/values:3.198 - timing/adv:0.057 - timing/update_critic:9.154 - timing/update_actor:10.823 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.001\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:56 - critic/kl:0.150 - critic/kl_coeff:0.001 - critic/vf_loss:0.124 - critic/vf_clipfrac:0.001 - critic/vpred_mean:0.584 - critic/grad_norm:50.596 - critic/lr:0.000 - actor/entropy_loss:0.200 - actor/pg_loss:0.016 - actor/pg_clipfrac:0.004 - actor/ppo_kl:0.000 - actor/grad_norm:3.316 - actor/lr:0.000 - critic/score/mean:0.473 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.451 - critic/rewards/max:0.991 - critic/rewards/min:-0.037 - critic/advantages/mean:-0.000 - critic/advantages/max:2.788 - critic/advantages/min:-2.548 - critic/returns/mean:0.399 - critic/returns/max:1.001 - critic/returns/min:-0.037 - critic/values/mean:0.707 - critic/values/max:1.484 - critic/values/min:-0.285 - response_length/mean:161.859 - response_length/max:256.000 - response_length/min:56.000 - prompt_length/mean:103.473 - prompt_length/max:190.000 - prompt_length/min:67.000 - timing/gen:21.750 - timing/ref:3.452 - timing/values:3.189 - timing/adv:0.055 - timing/update_critic:8.992 - timing/update_actor:10.782 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.001\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:57 - critic/kl:0.173 - critic/kl_coeff:0.001 - critic/vf_loss:0.139 - critic/vf_clipfrac:0.064 - critic/vpred_mean:0.498 - critic/grad_norm:42.923 - critic/lr:0.000 - actor/entropy_loss:0.202 - actor/pg_loss:-0.014 - actor/pg_clipfrac:0.003 - actor/ppo_kl:0.000 - actor/grad_norm:6.863 - actor/lr:0.000 - critic/score/mean:0.551 - critic/score/max:1.000 - critic/score/min:0.000 
- critic/rewards/mean:0.524 - critic/rewards/max:0.994 - critic/rewards/min:-0.458 - critic/advantages/mean:0.000 - critic/advantages/max:2.363 - critic/advantages/min:-3.153 - critic/returns/mean:0.465 - critic/returns/max:1.001 - critic/returns/min:-0.458 - critic/values/mean:0.200 - critic/values/max:0.961 - critic/values/min:-0.668 - response_length/mean:155.484 - response_length/max:256.000 - response_length/min:58.000 - prompt_length/mean:102.148 - prompt_length/max:194.000 - prompt_length/min:65.000 - timing/gen:21.281 - timing/ref:3.433 - timing/values:3.230 - timing/adv:0.056 - timing/update_critic:9.241 - timing/update_actor:10.909 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.001\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:58 - critic/kl:0.164 - critic/kl_coeff:0.001 - critic/vf_loss:0.207 - critic/vf_clipfrac:0.044 - critic/vpred_mean:0.939 - critic/grad_norm:84.893 - critic/lr:0.000 - actor/entropy_loss:0.196 - actor/pg_loss:-0.004 - actor/pg_clipfrac:0.003 - actor/ppo_kl:0.000 - actor/grad_norm:3.283 - actor/lr:0.000 - critic/score/mean:0.547 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.525 - critic/rewards/max:0.993 - critic/rewards/min:-0.036 - critic/advantages/mean:0.000 - critic/advantages/max:2.246 - critic/advantages/min:-2.305 - critic/returns/mean:0.474 - critic/returns/max:1.001 - critic/returns/min:-0.036 - critic/values/mean:1.102 - critic/values/max:1.812 - critic/values/min:0.367 - response_length/mean:153.152 - response_length/max:256.000 - response_length/min:35.000 - prompt_length/mean:103.617 - prompt_length/max:206.000 - prompt_length/min:66.000 - timing/gen:21.591 - timing/ref:3.401 - timing/values:3.106 - timing/adv:0.056 - timing/update_critic:8.997 - timing/update_actor:10.832 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - 
timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.001\n", "\u001b[36m(main_task pid=28294)\u001b[0m validation generation end\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>system\n", "\u001b[36m(main_task pid=28294)\u001b[0m You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>user\n", "\u001b[36m(main_task pid=28294)\u001b[0m John fills a 6 foot by 4 foot pool that is 5 feet deep. It cost $.1 per cubic foot to fill. How much does it cost to fill? Let's think step by step and output the final answer after \"####\".<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m <|im_start|>assistant\n", "\u001b[36m(main_task pid=28294)\u001b[0m First, we calculate the volume of the pool. The volume \\( V \\) of a rectangular prism is given by the formula:\n", "\u001b[36m(main_task pid=28294)\u001b[0m \\[ V = \\text{length} \\times \\text{width} \\times \\text{height} \\]\n", "\u001b[36m(main_task pid=28294)\u001b[0m Here, the length is 6 feet, the width is 4 feet, and the height is 5 feet. So,\n", "\u001b[36m(main_task pid=28294)\u001b[0m \\[ V = 6 \\times 4 \\times 5 = 120 \\text{ cubic feet} \\]\n", "\u001b[36m(main_task pid=28294)\u001b[0m Next, we calculate the cost to fill the pool. 
The cost is $0.1 per cubic foot, so the total cost \\( C \\) is:\n", "\u001b[36m(main_task pid=28294)\u001b[0m \\[ C = 120 \\times 0.1 = 12 \\text{ dollars} \\]\n", "\u001b[36m(main_task pid=28294)\u001b[0m #### 12<|im_end|>\n", "\u001b[36m(main_task pid=28294)\u001b[0m step:59 - critic/kl:0.167 - critic/kl_coeff:0.001 - critic/vf_loss:0.227 - critic/vf_clipfrac:0.000 - critic/vpred_mean:0.058 - critic/grad_norm:97.710 - critic/lr:0.000 - actor/entropy_loss:0.178 - actor/pg_loss:-0.008 - actor/pg_clipfrac:0.003 - actor/ppo_kl:-0.000 - actor/grad_norm:3.323 - actor/lr:0.000 - val/test_score/openai/gsm8k:0.478 - critic/score/mean:0.625 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.603 - critic/rewards/max:0.990 - critic/rewards/min:-0.037 - critic/advantages/mean:0.000 - critic/advantages/max:1.931 - critic/advantages/min:-2.355 - critic/returns/mean:0.548 - critic/returns/max:1.001 - critic/returns/min:-0.037 - critic/values/mean:0.215 - critic/values/max:0.871 - critic/values/min:-0.402 - response_length/mean:148.621 - response_length/max:256.000 - response_length/min:47.000 - prompt_length/mean:102.344 - prompt_length/max:170.000 - prompt_length/min:56.000 - timing/gen:21.616 - timing/ref:3.366 - timing/values:3.169 - timing/adv:0.056 - timing/update_critic:8.991 - timing/update_actor:10.783 - timing/testing:43.049 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.001\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:689: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. 
API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Saving actor checkpoint to checkpoints/verl_examples/gsm8k/actor/global_step_59\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m Saving critic checkpoint to checkpoints/verl_examples/gsm8k/critic/global_step_59\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:689: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:773: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py:716: UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.\n", "\u001b[36m(WorkerDict pid=28545)\u001b[0m warnings.warn(\n",
"\u001b[36m(main_task pid=28294)\u001b[0m step:60 - critic/kl:0.161 - critic/kl_coeff:0.001 - critic/vf_loss:0.138 - critic/vf_clipfrac:0.208 - critic/vpred_mean:0.673 - critic/grad_norm:59.237 - critic/lr:0.000 - actor/entropy_loss:0.173 - actor/pg_loss:-0.015 - actor/pg_clipfrac:0.003 - actor/ppo_kl:0.001 - actor/grad_norm:3.070 - actor/lr:0.000 - critic/score/mean:0.621 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.598 - critic/rewards/max:0.993 - critic/rewards/min:-0.043 - critic/advantages/mean:-0.000 - critic/advantages/max:1.835 - critic/advantages/min:-2.245 - critic/returns/mean:0.562 - critic/returns/max:1.001 - critic/returns/min:-0.043 - critic/values/mean:0.293 - critic/values/max:0.789 - critic/values/min:-0.336 - response_length/mean:155.738 - response_length/max:256.000 - response_length/min:50.000 - prompt_length/mean:103.066 - prompt_length/max:176.000 - prompt_length/min:67.000 - timing/gen:22.844 - timing/ref:3.415 - timing/values:3.202 - timing/adv:0.055 - timing/update_critic:9.302 - timing/update_actor:10.783 - timing_per_token/values:0.000 - timing_per_token/ref:0.000 - timing_per_token/update_critic:0.000 - timing_per_token/update_actor:0.000 - timing_per_token/adv:0.000 - timing_per_token/gen:0.001\n", "^C\n" ] } ], "source": [ "!PYTHONUNBUFFERED=1 python3 -m verl.trainer.main_ppo \\\n", " data.train_files=$HOME/data/gsm8k/train.parquet \\\n", " data.val_files=$HOME/data/gsm8k/test.parquet \\\n", " data.train_batch_size=256 \\\n", " 
data.max_prompt_length=512 \\\n", " data.max_response_length=256 \\\n", " actor_rollout_ref.model.path=$HOME/models/Qwen2.5-0.5B-Instruct \\\n", " actor_rollout_ref.actor.optim.lr=1e-6 \\\n", " actor_rollout_ref.actor.ppo_mini_batch_size=64 \\\n", " actor_rollout_ref.actor.ppo_micro_batch_size=1 \\\n", " actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=1 \\\n", " actor_rollout_ref.rollout.tensor_model_parallel_size=1 \\\n", " actor_rollout_ref.rollout.gpu_memory_utilization=0.4 \\\n", " actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=4 \\\n", " critic.optim.lr=1e-5 \\\n", " critic.model.path=$HOME/models/Qwen2.5-0.5B-Instruct \\\n", " critic.ppo_micro_batch_size=1 \\\n", " algorithm.kl_ctrl.kl_coef=0.001 \\\n", " trainer.val_before_train=False \\\n", " trainer.default_hdfs_dir=null \\\n", " trainer.n_gpus_per_node=1 \\\n", " trainer.nnodes=1 \\\n", " trainer.save_freq=10 \\\n", " trainer.test_freq=10 \\\n", " trainer.total_epochs=15 \\\n", " trainer.logger=\\[console\\]" ] }, { "cell_type": "markdown", "metadata": { "id": "zSn7lNlZ2vfL" }, "source": [ "# Stop and clean up resources" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "QuJ-LgdTAPkb", "outputId": "64f2ef75-4a6d-4a62-922e-3d09b33a8a44" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Did not find any active Ray processes.\n", "\u001b[0m" ] } ], "source": [ "!ray stop" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "accelerator": "GPU", "colab": { "gpuType": "T4", "provenance": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.10" }, "widgets": { 
"application/vnd.jupyter.widget-state+json": { "06873240926949d98e13872546c5231d": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "06e1b9b5d49d4ee3ab8d1a523659bcbf": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "07f455e5e6dd45b7ba52f78bfc7ec7d6": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": 
"LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "0ab915aba5e14e5bba7ba1c22a682b89": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "0c9b8ffe4b8c4b5ca72a21cc54a1feb9": { "model_module": "@jupyter-widgets/controls", 
"model_module_version": "1.5.0", "model_name": "ProgressStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "ProgressStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "bar_color": null, "description_width": "" } }, "0eeef594fb564491ad8d80f86a8fbfdc": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_9fbafd9fc26748b7889b5c52600f80a8", "placeholder": "​", "style": "IPY_MODEL_889e01d618544f7c9a9d748730255007", "value": " 242/242 [00:00<00:00, 15.3kB/s]" } }, "144df34a87334a6d8eb13055e7a9b9e4": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HBoxModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HBoxModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HBoxView", "box_style": "", "children": [ "IPY_MODEL_1e9ee1c383074f638a688b029d72bc79", "IPY_MODEL_5cfeadb8ff394f38ac2e23f1a66beeb3", "IPY_MODEL_0eeef594fb564491ad8d80f86a8fbfdc" ], "layout": "IPY_MODEL_771c5ca9460f4539b30f452dd3f36b12" } }, "1491cbb53c6e4bfb9d17cf123dea83dd": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", 
"_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "1a382959fdeb4554827397823284d2fa": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HBoxModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HBoxModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HBoxView", "box_style": "", "children": [ "IPY_MODEL_f52d7af1a82249a3aa7785476e10c2ad", "IPY_MODEL_afcc65785fef4b71b03ac83a4b14d97f", "IPY_MODEL_c0b19ca098a443598c662921832e8799" ], "layout": "IPY_MODEL_ca24445f8af44c8397f12d15d66eebf5" } }, "1e9ee1c383074f638a688b029d72bc79": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_fab6aab315214fcb884529a4dbf84fe5", "placeholder": "​", "style": 
"IPY_MODEL_06e1b9b5d49d4ee3ab8d1a523659bcbf", "value": "generation_config.json: 100%" } }, "252e651f687e47f3bd20518f2ac5fb9f": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "2749b87567ea4b6cbc4cf825e2282615": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "2babfcd555104f9d8ecf98e164ec40fc": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "ProgressStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "ProgressStyleModel", "_view_count": null, "_view_module": 
"@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "bar_color": null, "description_width": "" } }, "3447ed64518746cabb0176348fc88d96": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "35bacfb8aa4c4a25bf8ce2d13a00f2b8": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "35e124a16d2945ddbb3ade95ef2b5519": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, 
"_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "3e1dd2fd3bb049ab83aa987d748f5b9e": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "40259328dd5e4256939d7b1a3f038d98": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_866c770e39b64dfd9764de755f6a9ec5", "max": 1671839, "min": 0, "orientation": "horizontal", "style": "IPY_MODEL_2babfcd555104f9d8ecf98e164ec40fc", "value": 1671839 } }, 
"412349b6e00b4994bc3f63f8405b3ec2": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_be14bccf9f114d9f839c805afef08f61", "max": 988097824, "min": 0, "orientation": "horizontal", "style": "IPY_MODEL_52268a2d657b4e19badd66f781f68d93", "value": 988097824 } }, "4957b3690466495997721afab68ad93a": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "4bdbe0a8bb434bfc8e2172ecb5189705": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { 
"_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_35e124a16d2945ddbb3ade95ef2b5519", "placeholder": "​", "style": "IPY_MODEL_7de86c10755f4e0da7974bdf1815a85d", "value": "tokenizer_config.json: 100%" } }, "4d1a260957214732940766c874d3a02b": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_e6b66ca90c9c4b0ead5153e4a07cdc86", "placeholder": "​", "style": "IPY_MODEL_3e1dd2fd3bb049ab83aa987d748f5b9e", "value": "tokenizer.json: 100%" } }, "5110a596739443a8a640cfd50030644b": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_07f455e5e6dd45b7ba52f78bfc7ec7d6", "max": 659, "min": 0, "orientation": "horizontal", "style": "IPY_MODEL_fdf06125a50249b8878dbf01993306f4", "value": 659 } }, "52268a2d657b4e19badd66f781f68d93": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "ProgressStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", 
"_model_module_version": "1.5.0", "_model_name": "ProgressStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "bar_color": null, "description_width": "" } }, "538e82daa19140098a4053da6e23de45": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "5c8c3c4d700540f089f671d4f5d0dd9f": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": 
null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "5cfeadb8ff394f38ac2e23f1a66beeb3": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_e3848f0a11f8472fba3ecb624bc86dd9", "max": 242, "min": 0, "orientation": "horizontal", "style": "IPY_MODEL_c7b67dd574ad4c15b36930047553e9d3", "value": 242 } }, "645fee7bcccd42a794e4aa889c1fe145": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HBoxModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HBoxModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HBoxView", "box_style": "", "children": [ "IPY_MODEL_aa19071cede44a089d7f3b19227d51e0", "IPY_MODEL_412349b6e00b4994bc3f63f8405b3ec2", "IPY_MODEL_a921f9b0d3c74381b75aa60f4d1cac1c" ], "layout": "IPY_MODEL_b707bf4c56744f05ac9245b07f6d1788" } }, "69e57962129241a689cfd2933b64127c": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HBoxModel", "state": { 
"_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HBoxModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HBoxView", "box_style": "", "children": [ "IPY_MODEL_4bdbe0a8bb434bfc8e2172ecb5189705", "IPY_MODEL_b0bbbf7f9f264dfda2c0d6775567e446", "IPY_MODEL_6c9485ecc56f4027ad8f3824554e3968" ], "layout": "IPY_MODEL_3447ed64518746cabb0176348fc88d96" } }, "6b14f827b15f4e34be6590a5d2085b64": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "6c9485ecc56f4027ad8f3824554e3968": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_b10402691cc3480693dcf49d19336c72", "placeholder": "​", "style": "IPY_MODEL_f0350562775a4c4ca83772a78d05122b", "value": " 7.30k/7.30k [00:00<00:00, 497kB/s]" } }, "6cd310d2188d424eb20c3bf83ac34f56": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": 
null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "6f3742161c4f4bcc891c82aff7ece69f": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "ProgressStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "ProgressStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "bar_color": null, "description_width": "" } }, "7363ebea3a3a4f55b69b2d813c3b2fa5": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_e9f9be6fa1744f3380d21c451bc81555", "placeholder": "​", "style": "IPY_MODEL_c5024f35870446a0ae8fd747101ab719", "value": " 7.03M/7.03M [00:01<00:00, 6.03MB/s]" } }, "763679906f9248a7a5f4c8de952d98ae": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HBoxModel", "state": { "_dom_classes": [], "_model_module": 
"@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HBoxModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HBoxView", "box_style": "", "children": [ "IPY_MODEL_b319db13c64b43a38250342c81708f70", "IPY_MODEL_5110a596739443a8a640cfd50030644b", "IPY_MODEL_e93bf508749940909c3233904e898497" ], "layout": "IPY_MODEL_b45a42483d64410ba245feda17ae3e16" } }, "771c5ca9460f4539b30f452dd3f36b12": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "7920655156a44e629514673dde2b9663": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, 
"align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "7c45a87d87f44b2384a4fd316ae36663": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "7de86c10755f4e0da7974bdf1815a85d": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "835a6a0a56554d158ee40ccc5ccdffc5": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", 
"_view_name": "StyleView", "description_width": "" } }, "866c770e39b64dfd9764de755f6a9ec5": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "889e01d618544f7c9a9d748730255007": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "89a180c90767474b8e699e264620666e": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", 
"_view_module_version": "1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_a1255e85757e495a86ae366857fb64f1", "max": 7031645, "min": 0, "orientation": "horizontal", "style": "IPY_MODEL_6f3742161c4f4bcc891c82aff7ece69f", "value": 7031645 } }, "936061cb57ad445195efc0aa24dd8d66": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "9e2c1dcd2cd643bbb941d6697fcc75a0": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "ProgressStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "ProgressStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "bar_color": null, "description_width": "" } }, "9fbafd9fc26748b7889b5c52600f80a8": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, 
"justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "a1255e85757e495a86ae366857fb64f1": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "a726ef24d10c42bf859e4c76cebde672": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": 
"IPY_MODEL_5c8c3c4d700540f089f671d4f5d0dd9f", "placeholder": "​", "style": "IPY_MODEL_7c45a87d87f44b2384a4fd316ae36663", "value": "merges.txt: 100%" } }, "a921f9b0d3c74381b75aa60f4d1cac1c": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_06873240926949d98e13872546c5231d", "placeholder": "​", "style": "IPY_MODEL_936061cb57ad445195efc0aa24dd8d66", "value": " 988M/988M [00:24<00:00, 40.9MB/s]" } }, "aa19071cede44a089d7f3b19227d51e0": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_252e651f687e47f3bd20518f2ac5fb9f", "placeholder": "​", "style": "IPY_MODEL_835a6a0a56554d158ee40ccc5ccdffc5", "value": "model.safetensors: 100%" } }, "afcc65785fef4b71b03ac83a4b14d97f": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_e49f1b46d8ae4c3e8f894b1f411922b9", "max": 2776833, "min": 0, 
"orientation": "horizontal", "style": "IPY_MODEL_0c9b8ffe4b8c4b5ca72a21cc54a1feb9", "value": 2776833 } }, "b0bbbf7f9f264dfda2c0d6775567e446": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_4957b3690466495997721afab68ad93a", "max": 7305, "min": 0, "orientation": "horizontal", "style": "IPY_MODEL_9e2c1dcd2cd643bbb941d6697fcc75a0", "value": 7305 } }, "b10402691cc3480693dcf49d19336c72": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "b319db13c64b43a38250342c81708f70": { "model_module": 
"@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_538e82daa19140098a4053da6e23de45", "placeholder": "​", "style": "IPY_MODEL_6b14f827b15f4e34be6590a5d2085b64", "value": "config.json: 100%" } }, "b45a42483d64410ba245feda17ae3e16": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "b707bf4c56744f05ac9245b07f6d1788": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, 
"_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "be14bccf9f114d9f839c805afef08f61": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } 
}, "c0b19ca098a443598c662921832e8799": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_c3651669cb084d86b9b8c427c665d185", "placeholder": "​", "style": "IPY_MODEL_35bacfb8aa4c4a25bf8ce2d13a00f2b8", "value": " 2.78M/2.78M [00:01<00:00, 2.42MB/s]" } }, "c0e97dba53284330b0fb8cefc852d552": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HBoxModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HBoxModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HBoxView", "box_style": "", "children": [ "IPY_MODEL_4d1a260957214732940766c874d3a02b", "IPY_MODEL_89a180c90767474b8e699e264620666e", "IPY_MODEL_7363ebea3a3a4f55b69b2d813c3b2fa5" ], "layout": "IPY_MODEL_d49791321218419d8b7af314dd904777" } }, "c1020ed4d8a44747838ed59287d284ed": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HBoxModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HBoxModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HBoxView", "box_style": "", "children": [ "IPY_MODEL_a726ef24d10c42bf859e4c76cebde672", "IPY_MODEL_40259328dd5e4256939d7b1a3f038d98", "IPY_MODEL_ee0b85738cbf4376a6427fadbdecfad7" ], "layout": "IPY_MODEL_1491cbb53c6e4bfb9d17cf123dea83dd" } }, "c24df93c305e42cdbaed3d6111d72010": { "model_module": 
"@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "c3651669cb084d86b9b8c427c665d185": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "c5024f35870446a0ae8fd747101ab719": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "c7b67dd574ad4c15b36930047553e9d3": { 
"model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "ProgressStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "ProgressStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "bar_color": null, "description_width": "" } }, "ca24445f8af44c8397f12d15d66eebf5": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "d49791321218419d8b7af314dd904777": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, 
"border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "ddecda628c6a4a5680b4241633153ebd": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "e3848f0a11f8472fba3ecb624bc86dd9": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, 
"max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "e49f1b46d8ae4c3e8f894b1f411922b9": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "e6b66ca90c9c4b0ead5153e4a07cdc86": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, 
"grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "e93bf508749940909c3233904e898497": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_0ab915aba5e14e5bba7ba1c22a682b89", "placeholder": "​", "style": "IPY_MODEL_2749b87567ea4b6cbc4cf825e2282615", "value": " 659/659 [00:00<00:00, 27.5kB/s]" } }, "e9f9be6fa1744f3380d21c451bc81555": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, 
"justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "ee0b85738cbf4376a6427fadbdecfad7": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_7920655156a44e629514673dde2b9663", "placeholder": "​", "style": "IPY_MODEL_c24df93c305e42cdbaed3d6111d72010", "value": " 1.67M/1.67M [00:00<00:00, 1.93MB/s]" } }, "f0350562775a4c4ca83772a78d05122b": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "f52d7af1a82249a3aa7785476e10c2ad": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_6cd310d2188d424eb20c3bf83ac34f56", "placeholder": "​", "style": 
"IPY_MODEL_ddecda628c6a4a5680b4241633153ebd", "value": "vocab.json: 100%" } }, "fab6aab315214fcb884529a4dbf84fe5": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "fdf06125a50249b8878dbf01993306f4": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "ProgressStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "ProgressStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "bar_color": null, "description_width": "" } } } } }, "nbformat": 4, "nbformat_minor": 4 }