gaoqiong / composable_kernel · Commit c0be8480

Authored Mar 09, 2023 by rocking

    Move to subfolder. Prepare to add other ops of quantization.

Parent: 0a08477b
Showing 6 changed files, with 4 additions and 4 deletions:

+4 -4  library/src/tensor_operation_instance/gpu/quantization/CMakeLists.txt
+0 -0  library/src/tensor_operation_instance/gpu/quantization/conv2d_fwd/device_conv2d_xdl_bias_perchannel_quantization_int8_instance.cpp
+0 -0  library/src/tensor_operation_instance/gpu/quantization/conv2d_fwd/device_conv2d_xdl_bias_perlayer_quantization_int8_instance.cpp
+0 -0  library/src/tensor_operation_instance/gpu/quantization/conv2d_fwd/device_conv2d_xdl_int8_instance.hpp
+0 -0  library/src/tensor_operation_instance/gpu/quantization/conv2d_fwd/device_conv2d_xdl_perchannel_quantization_int8_instance.cpp
+0 -0  library/src/tensor_operation_instance/gpu/quantization/conv2d_fwd/device_conv2d_xdl_perlayer_quantization_int8_instance.cpp
library/src/tensor_operation_instance/gpu/quantization/CMakeLists.txt @ c0be8480

 add_instance_library(device_quantization_instance
-    device_conv2d_xdl_bias_perchannel_quantization_int8_instance.cpp
-    device_conv2d_xdl_bias_perlayer_quantization_int8_instance.cpp
-    device_conv2d_xdl_perchannel_quantization_int8_instance.cpp
-    device_conv2d_xdl_perlayer_quantization_int8_instance.cpp
+    conv2d_fwd/device_conv2d_xdl_bias_perchannel_quantization_int8_instance.cpp
+    conv2d_fwd/device_conv2d_xdl_bias_perlayer_quantization_int8_instance.cpp
+    conv2d_fwd/device_conv2d_xdl_perchannel_quantization_int8_instance.cpp
+    conv2d_fwd/device_conv2d_xdl_perlayer_quantization_int8_instance.cpp
 )
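For context, this is how the whole CMakeLists.txt source list reads after the commit, reconstructed from the diff above. The paths and the `add_instance_library` call are taken from the commit itself; note that `add_instance_library` is a project-specific helper function defined elsewhere in composable_kernel, not a built-in CMake command.

# library/src/tensor_operation_instance/gpu/quantization/CMakeLists.txt
# after c0be8480: all conv2d forward instance sources are referenced
# through the new conv2d_fwd/ subfolder (paths relative to this file)
add_instance_library(device_quantization_instance
    conv2d_fwd/device_conv2d_xdl_bias_perchannel_quantization_int8_instance.cpp
    conv2d_fwd/device_conv2d_xdl_bias_perlayer_quantization_int8_instance.cpp
    conv2d_fwd/device_conv2d_xdl_perchannel_quantization_int8_instance.cpp
    conv2d_fwd/device_conv2d_xdl_perlayer_quantization_int8_instance.cpp
)

Keeping source paths relative to the CMakeLists.txt lets the file list stay unchanged if the quantization directory itself is ever relocated.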
File moved:
library/src/tensor_operation_instance/gpu/quantization/device_conv2d_xdl_bias_perchannel_quantization_int8_instance.cpp
→ library/src/tensor_operation_instance/gpu/quantization/conv2d_fwd/device_conv2d_xdl_bias_perchannel_quantization_int8_instance.cpp

File moved:
library/src/tensor_operation_instance/gpu/quantization/device_conv2d_xdl_bias_perlayer_quantization_int8_instance.cpp
→ library/src/tensor_operation_instance/gpu/quantization/conv2d_fwd/device_conv2d_xdl_bias_perlayer_quantization_int8_instance.cpp

File moved:
library/src/tensor_operation_instance/gpu/quantization/device_conv2d_xdl_int8_instance.hpp
→ library/src/tensor_operation_instance/gpu/quantization/conv2d_fwd/device_conv2d_xdl_int8_instance.hpp

File moved:
library/src/tensor_operation_instance/gpu/quantization/device_conv2d_xdl_perchannel_quantization_int8_instance.cpp
→ library/src/tensor_operation_instance/gpu/quantization/conv2d_fwd/device_conv2d_xdl_perchannel_quantization_int8_instance.cpp

File moved:
library/src/tensor_operation_instance/gpu/quantization/device_conv2d_xdl_perlayer_quantization_int8_instance.cpp
→ library/src/tensor_operation_instance/gpu/quantization/conv2d_fwd/device_conv2d_xdl_perlayer_quantization_int8_instance.cpp