Unverified commit 7251bb4f authored by dependabot[bot], committed by GitHub

Bump urllib3 from 2.2.3 to 2.5.0 in /examples/server (#11748)

Bumps [urllib3](https://github.com/urllib3/urllib3) from 2.2.3 to 2.5.0.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/2.2.3...2.5.0)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-version: 2.5.0
  dependency-type: indirect
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
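
For context, the `# via -r requirements.in` annotations in the diff below suggest the pins are pip-compile (pip-tools) output. Assuming that, a single-package bump like this one can be reproduced locally with a sketch along these lines; the use of pip-tools and the file paths (inferred from the PR title, `/examples/server`) are assumptions, not part of this commit:

```python
# Hedged sketch, not project tooling: re-resolve the lock file while moving
# only urllib3 to the requested version. Paths are assumed from the PR title.
import subprocess

subprocess.run(
    [
        "pip-compile",
        "--upgrade-package", "urllib3==2.5.0",   # bump only urllib3, keep other pins
        "--output-file", "examples/server/requirements.txt",
        "examples/server/requirements.in",
    ],
    check=True,
)
```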
parent 3fba74e1
@@ -10,6 +10,8 @@ annotated-types==0.7.0
     # via pydantic
 anyio==4.6.2.post1
     # via starlette
+async-timeout==4.0.3
+    # via aiohttp
 attrs==24.2.0
     # via aiohttp
 certifi==2024.8.30
@@ -18,6 +20,8 @@ charset-normalizer==3.4.0
     # via requests
 click==8.1.7
     # via uvicorn
+exceptiongroup==1.3.0
+    # via anyio
 fastapi==0.115.3
     # via -r requirements.in
 filelock==3.16.1
@@ -25,6 +29,7 @@ filelock==3.16.1
     #   huggingface-hub
     #   torch
     #   transformers
+    #   triton
 frozenlist==1.5.0
     # via
     #   aiohttp
@@ -54,10 +59,41 @@ multidict==6.1.0
     # via
     #   aiohttp
     #   yarl
-networkx==3.4.2
+networkx==3.2.1
     # via torch
-numpy==2.1.2
+numpy==2.0.2
     # via transformers
+nvidia-cublas-cu12==12.1.3.1
+    # via
+    #   nvidia-cudnn-cu12
+    #   nvidia-cusolver-cu12
+    #   torch
+nvidia-cuda-cupti-cu12==12.1.105
+    # via torch
+nvidia-cuda-nvrtc-cu12==12.1.105
+    # via torch
+nvidia-cuda-runtime-cu12==12.1.105
+    # via torch
+nvidia-cudnn-cu12==9.1.0.70
+    # via torch
+nvidia-cufft-cu12==11.0.2.54
+    # via torch
+nvidia-curand-cu12==10.3.2.106
+    # via torch
+nvidia-cusolver-cu12==11.4.5.107
+    # via torch
+nvidia-cusparse-cu12==12.1.0.106
+    # via
+    #   nvidia-cusolver-cu12
+    #   torch
+nvidia-nccl-cu12==2.20.5
+    # via torch
+nvidia-nvjitlink-cu12==12.9.86
+    # via
+    #   nvidia-cusolver-cu12
+    #   nvidia-cusparse-cu12
+nvidia-nvtx-cu12==12.1.105
+    # via torch
 packaging==24.1
     # via
     #   huggingface-hub
@@ -109,14 +145,21 @@ tqdm==4.66.5
     #   transformers
 transformers==4.46.1
     # via -r requirements.in
+triton==3.0.0
+    # via torch
 typing-extensions==4.12.2
     # via
+    #   anyio
+    #   exceptiongroup
     #   fastapi
     #   huggingface-hub
+    #   multidict
     #   pydantic
     #   pydantic-core
+    #   starlette
     #   torch
-urllib3==2.2.3
+    #   uvicorn
+urllib3==2.5.0
     # via requests
 uvicorn==0.32.0
     # via -r requirements.in
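
After the upgrade, a minimal sanity check (standard library only, not part of this commit) can confirm that an environment actually resolves urllib3 at the version pinned above:

```python
# Minimal post-upgrade check: verify the installed urllib3 matches the new pin.
# The expected version string mirrors the pin in the diff; nothing else here
# is taken from the project.
from importlib.metadata import PackageNotFoundError, version

def check_pin(package: str = "urllib3", expected: str = "2.5.0") -> None:
    try:
        installed = version(package)
    except PackageNotFoundError:
        raise SystemExit(f"{package} is not installed in this environment")
    if installed != expected:
        raise SystemExit(f"{package}=={installed} found, expected =={expected}")
    print(f"{package}=={installed} matches the pin")

if __name__ == "__main__":
    check_pin()
```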