sglang · Commits · b2986d7a

Commit b2986d7a (unverified), authored Dec 04, 2024 by HAI, committed via GitHub on Dec 04, 2024

Adding SGLang FP8 Utils (#2348)
parent f8b03269

Changes: 1 changed file with 27 additions and 0 deletions (+27 −0)

python/sglang/srt/layers/quantization/fp8_utils.py (new file, 0 → 100644)
```python
from typing import Optional, Tuple

import torch


def normalize_e4m3fn_to_e4m3fnuz(
    weight: torch.Tensor,
    weight_scale: torch.Tensor,
    input_scale: Optional[torch.Tensor] = None,
) -> Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]:
    assert weight.dtype == torch.float8_e4m3fn
    # The bit pattern 10000000 (-128) represents zero in e4m3fn
    # but NaN in e4m3fnuz. So here we set it to 0.
    # https://onnx.ai/onnx/technical/float8.html
    weight_as_int8 = weight.view(torch.int8)
    ROCM_FP8_NAN_AS_INT = -128
    weight_as_int8[weight_as_int8 == ROCM_FP8_NAN_AS_INT] = 0
    weight = weight_as_int8.view(torch.float8_e4m3fnuz)

    # For the same bit representation, the e4m3fnuz value is half of
    # the e4m3fn value, so we should double the scaling factor to
    # get the same dequantized value.
    # https://onnx.ai/onnx/technical/float8.html
    weight_scale = weight_scale * 2.0
    if input_scale is not None:
        input_scale = input_scale * 2.0
    return weight, weight_scale, input_scale
```
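The normalization does two independent things: it remaps the one bit pattern (`0b10000000`, i.e. -128 as a signed byte) that means negative zero in e4m3fn but NaN in e4m3fnuz, and it doubles the scale factors because the same bits decode to half the value in e4m3fnuz. The logic can be sketched in plain Python, without torch, by modeling the fp8 storage bytes as signed ints (the function and variable names here are hypothetical, for illustration only):

```python
ROCM_FP8_NAN_AS_INT = -128  # byte 0b10000000: zero in e4m3fn, NaN in e4m3fnuz


def normalize_bytes_and_scale(fp8_bytes, weight_scale, input_scale=None):
    # Step 1: remap e4m3fn's negative-zero byte (-128) to +0, since that
    # bit pattern would decode to NaN under e4m3fnuz.
    fixed = [0 if b == ROCM_FP8_NAN_AS_INT else b for b in fp8_bytes]
    # Step 2: identical bits decode to half the value in e4m3fnuz, so
    # double every scale to keep the dequantized values unchanged.
    weight_scale = weight_scale * 2.0
    if input_scale is not None:
        input_scale = input_scale * 2.0
    return fixed, weight_scale, input_scale


bytes_out, scale_out, _ = normalize_bytes_and_scale([-128, 5, -7], 0.25)
# bytes_out == [0, 5, -7]; scale_out == 0.5
```

The key invariant is that dequantization is `value * scale`: halving every stored value while doubling the scale leaves the product, and therefore the dequantized weights, unchanged.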