GitLab
ModelZoo · Issues

  • Open 42
  • Closed 142
  • All 184
  • Model files are not saved under the ./work_dirs/job/job_id/ path, and other issues
    llama3_pytorch#2 · created May 20, 2024 by JayFu
    • CLOSED
    • 5
    updated May 30, 2024
  • Saving the model with stage3_gather_16bit_weights_on_model_save set to true runs out of memory; is there a workaround?
    qwen-torch#2 · created May 17, 2024 by xdchenhao
    • 3
    updated Aug 06, 2024
  • Compilation error
    llama_fastertransformer#7 · created May 14, 2024 by xurui
    • CLOSED
    • 2
    updated Nov 05, 2024
  • When is qwen1.5-32B-chat expected to support fastllm conversion? Converting with the existing method currently leads to the following problems during inference
    qwen-7b_fastllm#2 · created May 10, 2024 by eating1
    • 1
    updated Jun 13, 2024
  • Training error
    qwen1.5-pytorch#1 · created May 09, 2024 by hongzhen
    • CLOSED
    • 1
    updated May 10, 2024
  • Multi-GPU training reports horovod as unavailable even though the pulled image already includes the horovod library, so multi-GPU training fails.
    bladedisc_deepmd#1 · created May 06, 2024 by wenbozhao
    • CLOSED
    • 1
    updated Jun 13, 2024
  • Typo?
    llama3_pytorch#1 · created May 06, 2024 by liuxiaofeng
    • CLOSED
    • 1
    updated May 06, 2024
  • When will the Qwen1.5 series and the llama3 series be supported? Is there a plan?
    qwen_lmdeploy#5 · created Apr 27, 2024 by ac31lh9vwm
    • CLOSED
    • 1
    updated May 14, 2024
  • The yolov8 preprocessing used in this example does not seem to be the official letterbox?
    yolov8_migraphx#2 · created Apr 26, 2024 by wangkaixiong
    • CLOSED
    • 1
    updated Aug 06, 2024
  • deepmd distributed training
    qwen_lmdeploy#4 · created Apr 25, 2024 by c15468073
    • CLOSED
    • 1
    updated May 14, 2024
  • When I compile the model with the offload_copy option set to false, the inference result on migraphx is still always copied back to the CPU
    yolov8_migraphx#1 · created Apr 24, 2024 by wangkaixiong
    • CLOSED
    • 2
    updated Aug 06, 2024
  • In the composite_demo of ChatGLM3-6b_pytorch, the Code Interpreter cannot display images
    chatglm3-6b_pytorch#1 · created Apr 08, 2024 by ached13n1s
    • CLOSED
    • 2
    updated May 18, 2024
  • Request: add benchmark support for the K100 card on dtk23.10
    chatglm2-6b_fastllm#3 · created Mar 26, 2024 by youbo
    • CLOSED
    • 1
    updated Aug 06, 2024
  • Is there a ready-to-run project? I would like to test the effect of my optimizations
    hat_pytorch#1 · created Mar 26, 2024 by msheshuaiming
    • CLOSED
    • 1
    updated Apr 25, 2024
  • How to set up this project on the supercomputing platform
    latte_pytorch#1 · created Mar 24, 2024 by heshuaiming
    • CLOSED
    • 2
    updated Apr 01, 2024
  • The downloaded model is a .safetensors file; there is no ckpt file.
    gemma_pytorch#1 · created Mar 21, 2024 by xurui
    • CLOSED
    • 1
    updated Aug 06, 2024
  • Inference with the kv_cache int8-quantized model produces abnormal output that keeps refreshing the screen
    qwen_lmdeploy#3 · created Mar 13, 2024 by wangkaixiong
    • CLOSED
    • 3
    updated May 14, 2024
  • RuntimeError when running
    bert_large_squad_onnxruntime#2 · created Mar 06, 2024 by tianlh
    • CLOSED
    • 2
    updated Mar 12, 2024
  • Broken link
    llama_fastchat_pytorch#2 · created Mar 02, 2024 by panpy
    • CLOSED
    • 1
    updated Apr 03, 2024
  • The temperature reported by rocm-smi differs from what the documentation states
    baichuan_lmdeploy#1 · created Feb 27, 2024 by wangkaixiong
    • CLOSED
    • 0
    updated Feb 27, 2024