suily / VTimeLLM_pytorch

Commit 3c99f54e, authored Nov 21, 2024 by suily

Update README
parent dc88ba58

Showing 1 changed file with 35 additions and 34 deletions.
README.md
...
@@ -24,40 +24,6 @@ Vicuna: i.e. the LLM; <video> stands for the video content, and the visual features Z are embedded into the tex
<img src="./doc/VTimeLLM.PNG" />
</div>
## Code Modification Notes
deepspeed 0.14.2 only works with transformers 4.31.0, and that version does not support bf16/tf32 precision on the k100ai. Modify the transformers code as follows to avoid precision errors.
Note: this repository already ships the modified code; no further changes are needed.
```
pip show pip  # shows the site-packages path where the dependencies are installed
```
1. In site-packages/transformers/utils/import_utils.py, modify def is_torch_bf16_gpu_available():
```
...
# TODO: was `if torch.cuda.is_available() and torch.version.cuda is not None:`
if torch.cuda.is_available():
    if torch.cuda.get_device_properties(torch.cuda.current_device()).major < 8:
        return False
    # if int(torch.version.cuda.split(".")[0]) < 11:
    #     return False
    if not hasattr(torch.cuda.amp, "autocast"):
        return False
else:
    return False
```
2. In site-packages/transformers/utils/import_utils.py, modify def is_torch_tf32_available():
```
...
# TODO: was `if not torch.cuda.is_available() or torch.version.cuda is None:`
if not torch.cuda.is_available():
    return False
if torch.cuda.get_device_properties(torch.cuda.current_device()).major < 8:
    return False
# if int(torch.version.cuda.split(".")[0]) < 11:
#     return False
if version.parse(version.parse(torch.__version__).base_version) < version.parse("1.7"):
    return False
return True
```
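Both patches follow the same pattern: the `torch.version.cuda` guard (and the CUDA-version check) is dropped, presumably because `torch.version.cuda` is `None` on the k100ai software stack even when `torch.cuda.is_available()` is true. As a rough sketch with hypothetical helper names (not part of transformers), the patched bf16 decision reduces to:

```python
def bf16_gpu_available(cuda_available: bool, device_major: int, has_autocast: bool) -> bool:
    # Patched logic: no torch.version.cuda / CUDA-version checks, so a
    # backend where torch.version.cuda is None is no longer rejected outright.
    if not cuda_available:
        return False
    # The device must still report compute capability major >= 8 ...
    if device_major < 8:
        return False
    # ... and the installed torch must still expose torch.cuda.amp.autocast.
    return has_autocast

print(bf16_gpu_available(True, 8, True))   # True
print(bf16_gpu_available(True, 7, True))   # False
print(bf16_gpu_available(False, 9, True))  # False
```

The remaining checks (device capability and autocast support) are untouched, so the patch only widens the set of backends that get as far as those checks.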
## Environment Setup
### Docker (Option 1)
```
...
```
...
@@ -110,6 +76,41 @@ export HF_ENDPOINT=https://hf-mirror.com
```
apt update
apt install gcc libaio-dev
```
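The setup above relies on `export HF_ENDPOINT=https://hf-mirror.com` so Hugging Face downloads go through the mirror. A quick sanity check (shell sketch, not from the original README) that the variable is set for tools launched from this shell:

```shell
# export makes HF_ENDPOINT visible to child processes,
# e.g. huggingface-cli or the training scripts
export HF_ENDPOINT=https://hf-mirror.com
echo "$HF_ENDPOINT"  # prints https://hf-mirror.com
```

Remember that an `export` only affects the current shell session; add it to the shell profile or the training launch script if it must persist.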
## Datasets
### Training Datasets
VTimeLLM can be trained as an English version based on Vicuna v1.5, or as a Chinese version based on ChatGLM3-6b. When training one version, download only the corresponding dataset (the data differs between the two).
...