ModelZoo · Issues

Open 41 · Closed 135 · All 176
  • Cannot find tensile
    Deepseek-r1_ollama#2 · created Mar 04, 2025 by ychan
    • 0
    updated Mar 04, 2025
  • No response after running torchrun, and no error is reported
    deepseek-r1_pytorch#4 · created Feb 24, 2025 by ychan
    • 3
    updated Mar 04, 2025
  • Error when running vllm serve
    deepseek-r1-distill_vllm#2 · created Feb 18, 2025 by azmat
    • 5
    updated Mar 09, 2025
  • Followed the tutorial to download the image and install the pip packages, but got RuntimeError: No HIP GPUs are available (see the device-visibility sketch after this list)
    deepseek-r1-distill_vllm#1 · created Feb 12, 2025 by calvin11
    • 1
    updated Feb 12, 2025
  • Can this be deployed on the Hygon No. 1 DCU? Its VRAM is 16 GB; does deploying deepseek-R1 have any VRAM requirements? Could you share the accelerator card configuration from a successful deployment?
    deepseek-r1_pytorch#3 · created Feb 11, 2025 by wangxh
    • 1
    updated Feb 12, 2025
  • First-token latency degrades abnormally after multi-turn conversations
    Deepseek-r1_ollama#1 · created Feb 10, 2025 by xurui
    • 0
    updated Feb 10, 2025
  • What does /path/to/fp8_weights mean, and how do I obtain it?
    deepseek-r1_pytorch#2 · created Feb 07, 2025 by wangxh
    • 5
    updated Feb 08, 2025
  • Where is the file /path/to/fp8_weights/model.safetensors.index.json?
    deepseek-r1_pytorch#1 · created Feb 05, 2025 by ahsqxt_021
    • 1
    updated Feb 08, 2025
  • Error in inference: audio output is silent
    f5-tts_pytorch#1 · created Nov 11, 2024 by chenzk
    • 2
    updated Jul 02, 2025
  • Multi-GPU fine-tuning of Qwen72B
    llama-factory-llama3.2_pytorch#1 · created Nov 08, 2024 by yuanfei
    • 2
    updated Nov 12, 2024
  • Multi-node training fails with the following error
    resnet50_tensorflow#2 · created Nov 06, 2024 by wangkx1
    • 2
    updated Nov 06, 2024
  • Error during vllm multi-GPU inference
    llama_vllm#6 · created Sep 11, 2024 by binbin2024
    • 1
    updated Oct 16, 2024
  • Unable to launch multi-GPU execution
    vit_pytorch#2 · created Aug 27, 2024 by liuyt2
    • 1
    updated Aug 28, 2024
  • Training in the DCU2 environment fails with an insufficient-VRAM error
    qwen-vl_pytorch#1 · created Aug 01, 2024 by xiaxsh
    • 1
    updated Aug 06, 2024
  • core dump issue
    llama_lmdeploy#2 · created Jun 26, 2024 by yaotong1
    • 0
    updated Jun 26, 2024
  • Can lmdeploy be adapted to the latest version to support llama3 inference?
    qwen_lmdeploy#7 · created Jun 21, 2024 by ncic_liuyao
    • 1
    updated Aug 06, 2024
  • lmdeploy 0.0.13 error: RuntimeError: [TM][ERROR] Assertion fail
    llama_lmdeploy#1 · created Jun 17, 2024 by ncic_liuyao
    • 0
    updated Jun 17, 2024
  • How should the build environment for the C++ version of this project be configured?
    yolov7_migraphx#1 · created Jun 07, 2024 by qirui4046
    • 1
    updated Jun 18, 2024
  • How to resolve OOM when running single-node inference on Z100?
    qwen1.5_vllm#1 · created Jun 03, 2024 by JayFu
    • 4
    updated Jun 25, 2024
  • Saving the model with stage3_gather_16bit_weights_on_model_save set to true runs out of memory; is there a solution?
    qwen-torch#2 · created May 17, 2024 by xdchenhao
    • 3
    updated Aug 06, 2024
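
Several issues above (e.g. deepseek-r1-distill_vllm#1, "RuntimeError: No HIP GPUs are available") come down to whether the ROCm/DTK build of PyTorch can see any DCU/HIP devices at all. Below is a minimal sketch of such a check, assuming a ROCm build of PyTorch is installed in the container; it is a diagnostic aid, not a fix from the issue threads.

    import torch

    # On ROCm builds of PyTorch, torch.version.hip is a version string
    # (it is None on CUDA-only builds).
    print("HIP runtime:", torch.version.hip)

    # torch.cuda.* is routed to HIP on ROCm, so these report HIP/DCU devices.
    print("Devices visible:", torch.cuda.is_available())
    print("Device count:", torch.cuda.device_count())
    if torch.cuda.is_available():
        print("Device 0:", torch.cuda.get_device_name(0))

If this prints False / 0, the cause is typically outside vllm: the container was started without the device nodes (e.g. /dev/kfd and /dev/dri passed to docker run) or HIP_VISIBLE_DEVICES hides the cards. Ruling that out first narrows the remaining errors to the serving stack itself.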