GitLab · ModelZoo · Issues
  • Open 42
  • Closed 142
  • All 184
  • benchmark code has not been updated along with the vLLM version
    qwen2.5-vllm#1 · created Aug 19, 2025 by liuxiaofeng
    • 0
    updated Aug 19, 2025
  • Error when parsing a PDF file
    mineru_pytorch#1 · created Aug 15, 2025 by qkkcoolmax
    • 1
    updated Aug 15, 2025
  • Error reported
    wan2.1_pytorch#1 · created Jul 30, 2025 by birdzzh
    • 0
    updated Jul 30, 2025
  • The lmdeploy repository link is wrong
    codellama_lmdeploy#1 · created Jul 15, 2025 by chenzk
    • 1
    updated Jul 15, 2025
  • Question about training with Llama Factory
    glm-4_pytorch#3 · created Jul 10, 2025 by t5y6jjj
    • CLOSED
    • 1
    updated Jul 21, 2025
  • ERROR 07-10 10:30:49 [multiproc_worker_utils.py:120] Worker VllmWorkerProcess pid 2692 died, exit code: -15
    qwen3-reranker#1 · created Jul 10, 2025 by xiaxsh
    • CLOSED
    • 2
    updated Jul 21, 2025
  • Inference error: RuntimeError: No HIP GPUs are available
    ssd_pytorch#1 · created Jun 04, 2025 by t5y6jjj
    • 0
    updated Jun 04, 2025
  • Is vLLM inference supported yet?
    Qwen2.5-Omni_pytorch#1 · created May 15, 2025 by chenzk
    • 0
    updated May 15, 2025
  • Triton compilation error during vLLM inference
    Qwen3_pytorch#2 · created May 02, 2025 by richar_
    • 1
    updated May 13, 2025
  • Is vLLM inference working yet?
    Qwen3_pytorch#1 · created Apr 29, 2025 by chenzk
    • 3
    updated Jun 11, 2025
  • Inference error: segmentation fault (core dump)
    wenet_onnxruntime#1 · created Mar 20, 2025 by hehaijun
    • 0
    updated Mar 20, 2025
  • When will the quantized version of QwQ-32B be available?
    QwQ-32B_pytorch#2 · created Mar 19, 2025 by chenzk
    • 1
    updated Mar 19, 2025
  • failed to read sysfs node
    Deepseek-r1_ollama#4 · created Mar 17, 2025 by quyuanhao123
    • 1
    updated Mar 17, 2025
  • Error after deployment: POST predict: Post "http://127.0.0.1:41546/completion": EOF
    Deepseek-r1_ollama#3 · created Mar 14, 2025 by Eddiehza
    • 0
    updated Mar 14, 2025
  • When will the QwQ-32B-AWQ model be supported?
    QwQ-32B_pytorch#1 · created Mar 13, 2025 by jixx
    • 3
    updated Mar 19, 2025
  • tensile cannot be found
    Deepseek-r1_ollama#2 · created Mar 04, 2025 by ychan
    • 0
    updated Mar 04, 2025
  • No response after torchrun, and no error message either
    deepseek-r1_pytorch#4 · created Feb 24, 2025 by ychan
    • 3
    updated Mar 04, 2025
  • Error when running vllm serve
    deepseek-r1-distill_vllm#2 · created Feb 18, 2025 by azmat
    • 5
    updated Mar 09, 2025
  • Followed the tutorial to download the image and install the pip packages, but got RuntimeError: No HIP GPUs are available
    deepseek-r1-distill_vllm#1 · created Feb 12, 2025 by calvin11
    • 1
    updated Feb 12, 2025
  • Can this be deployed on a Hygon No. 1 DCU? The Hygon No. 1 DCU has 16 GB of memory; does deploying deepseek-R1 have a memory requirement? Could you share the accelerator-card configuration of a successful deployment?
    deepseek-r1_pytorch#3 · created Feb 11, 2025 by wangxh
    • 1
    updated Feb 12, 2025