ModelZoo · Issues

Open 41 · Closed 135 · All 176
  • Compilation error
    llama_fastertransformer#7 · created May 14, 2024 by xurui · CLOSED · 2 · updated Nov 05, 2024
  • qwen2-72b fails to start on K100AI with the 24.04.1 image
    qwen1.5_vllm#3 · created Jul 02, 2024 by jixx · CLOSED · 1 · updated Nov 05, 2024
  • Can a web-server inference example be added for vllm?
    llama_vllm#1 · created Jun 21, 2024 by ncic_liuyao · CLOSED · 3 · updated Oct 21, 2024
  • Qwen2-72B-Instruct-GPTQ-Int4 benchmark_throughput test: with fixed input/output lengths, performance does not scale linearly as num-prompts increases
    qwen1.5_vllm#4 · created Aug 01, 2024 by xurui · CLOSED · 2 · updated Oct 16, 2024
  • CUDA error on K100: HIPBLAS_STATUS_NOT_SUPPORTED
    qwen1.5_vllm#2 · created Jun 14, 2024 by xurui · CLOSED · 1 · updated Oct 16, 2024
  • glm4 9b inference outputs nothing but exclamation marks
    chatglm_vllm#1 · created Jul 10, 2024 by quyuanhao123 · CLOSED · 0 · updated Oct 16, 2024
  • Inference with CodeLlama-7b-Instruct-hf generates noisy text
    llama_vllm#5 · created Jul 04, 2024 by acane · CLOSED · 2 · updated Oct 16, 2024
  • llama-7b-chat inference test returns unreasonable answers
    llama_vllm#4 · created Jul 02, 2024 by eating1 · CLOSED · 2 · updated Oct 16, 2024
  • Can vllm be updated to the latest version to support llama3 inference?
    llama_vllm#2 · created Jun 21, 2024 by ncic_liuyao · CLOSED · 2 · updated Oct 16, 2024
  • qwen2.5 7B errors out when running on 8 cards
    qwen2.5_pytorch#1 · created Oct 15, 2024 by gaoxuan · CLOSED · 1 · updated Oct 15, 2024
  • RuntimeError: [enforce fail at /data/jenkins_workspace/workspace/pytorch@4/third_party/gloo/gloo/transport/tcp/device.cc:210] ifa != nullptr. Unable to find interface for: [0.5.118.124]
    internlm_2.5_pytorch#2 · created Sep 25, 2024 by xurui · CLOSED · 1 · updated Oct 11, 2024
  • Does this project not include the Internlm_2.5 code?
    internlm_2.5_pytorch#1 · created Sep 24, 2024 by xurui · CLOSED · 0 · updated Sep 25, 2024
  • Dataset download link in the readme is broken
    bert-pytorch#1 · created Sep 13, 2024 by xurui · CLOSED · 1 · updated Sep 23, 2024
  • On Z100: error "unsupported conversion from f16 to f16" followed by Aborted (core dumped)
    stablediffusion_v2.1_pytorch#1 · created Aug 17, 2024 by zhangxp1 · CLOSED · 4 · updated Sep 13, 2024
  • ImportError: cannot import name 'is_torch_mlu_available' from 'transformers.utils' (/usr/local/lib/python3.10/site-packages/transformers/utils/__init__.py)
    llama2_lora_pytorch#1 · created Sep 12, 2024 by xurui · CLOSED · 2 · updated Sep 12, 2024
  • Request: add benchmark support for K100 cards with dtk23.10
    chatglm2-6b_fastllm#3 · created Mar 26, 2024 by youbo · CLOSED · 1 · updated Aug 06, 2024
  • The yolov8 preprocessing in this example does not seem to be the official letterbox?
    yolov8_migraphx#2 · created Apr 26, 2024 by wangkaixiong · CLOSED · 1 · updated Aug 06, 2024
  • When the model is compiled with the offload_copy option set to false, migraphx still copies the inference results back to the CPU
    yolov8_migraphx#1 · created Apr 24, 2024 by wangkaixiong · CLOSED · 2 · updated Aug 06, 2024
  • The downloaded model is a .safetensors file; there is no ckpt file
    gemma_pytorch#1 · created Mar 21, 2024 by xurui · CLOSED · 1 · updated Aug 06, 2024
  • Please add the inference or evaluation steps
    qwen-torch#1 · created Oct 07, 2023 by Sugon_ldc · CLOSED · 3 · updated Aug 06, 2024