gaoqiong / composable_kernel · Commits · cd0c1f57

Unverified commit cd0c1f57, authored Apr 19, 2023 by turneram, committed by GitHub on Apr 19, 2023.

Merge branch 'develop' into migx-device-interface

Parents: c72a0d3e, bb0b772d
Changes (122 files in this commit). Showing 20 changed files with 229 additions and 287 deletions (+229 -287).
docs/source/Linux_Install_Guide.rst  +0 -15
docs/source/Makefile  +0 -20
docs/source/conf.py  +0 -219
docs/source/index.rst  +0 -16
docs/source/rocm_logo.png  +0 -0
docs/tutorial_hello_world.rst  +0 -0
example/30_grouped_conv_fwd_multiple_d/grouped_conv_fwd_xdl_fp16.cpp  +1 -1
example/30_grouped_conv_fwd_multiple_d/run_grouped_conv_fwd_bias_relu_add_wmma_example.inc  +2 -2
example/40_conv2d_fwd_quantization/CMakeLists.txt  +5 -0
example/40_conv2d_fwd_quantization/conv2d_fwd_dl_bias_relu_perchannel_quantization_int8.cpp  +6 -2
example/40_conv2d_fwd_quantization/conv2d_fwd_dl_bias_relu_perlayer_quantization_int8.cpp  +7 -2
example/40_conv2d_fwd_quantization/conv2d_fwd_dl_bias_tanh_perchannel_quantization_int8.cpp  +87 -0
example/40_conv2d_fwd_quantization/conv2d_fwd_dl_bias_tanh_perlayer_quantization_int8.cpp  +85 -0
example/40_conv2d_fwd_quantization/conv2d_fwd_dl_perchannel_quantization_int8.cpp  +5 -1
example/40_conv2d_fwd_quantization/conv2d_fwd_dl_perlayer_quantization_int8.cpp  +6 -1
example/40_conv2d_fwd_quantization/conv2d_fwd_xdl_bias_relu_perchannel_quantization_int8.cpp  +6 -2
example/40_conv2d_fwd_quantization/conv2d_fwd_xdl_bias_relu_perlayer_quantization_int8.cpp  +7 -2
example/40_conv2d_fwd_quantization/conv2d_fwd_xdl_perchannel_quantization_int8.cpp  +5 -1
example/40_conv2d_fwd_quantization/conv2d_fwd_xdl_perlayer_quantization_int8.cpp  +6 -1
example/40_conv2d_fwd_quantization/run_conv2d_fwd_bias_perchannel_quantization_example.inc  +1 -2
docs/source/Linux_Install_Guide.rst deleted (100644 → 0)

=====================
Getting Started Guide
=====================

------------
Introduction
------------

This document contains instructions for installing, using, and contributing to Composable Kernel (CK).

Documentation Roadmap
^^^^^^^^^^^^^^^^^^^^^

The following is a list of CK documents in the suggested reading order:

[TODO]
\ No newline at end of file
docs/source/Makefile deleted (100644 → 0)

# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
SPHINXPROJ    = CK
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
docs/source/conf.py deleted (100644 → 0)

"""Copyright (C) 2018-2023 Advanced Micro Devices, Inc. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell cop-
ies of the Software, and to permit persons to whom the Software is furnished
to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IM-
PLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNE-
CTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
# -*- coding: utf-8 -*-
#
# Composable Kernel (CK) documentation build configuration file, based on
# rocBLAS documentation build configuration file, created by
# sphinx-quickstart on Mon Jan 8 16:34:42 2018.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))

import os
import sys
import subprocess

read_the_docs_build = os.environ.get('READTHEDOCS', None) == 'True'

if read_the_docs_build:
    subprocess.call('../run_doxygen.sh')

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['sphinx.ext.mathjax', 'breathe', 'sphinxcontrib.bibtex']

breathe_projects = {"CK": "../docBin/xml"}
breathe_default_project = "CK"

bibtex_bibfiles = ['refs.bib']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'Composable Kernel (CK)'
copyright = u'2018-2023, Advanced Micro Devices'
author = u'Advanced Micro Devices'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
#version = u'0.8'
# The full version, including alpha/beta/rc tags.
#release = u'0.8'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = 'en'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False

# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
# html_theme = 'alabaster'
#if read_the_docs_build:
#    html_theme = 'default'
#else:
import sphinx_rtd_theme
html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
html_logo = "rocm_logo.png"

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {'logo_only': True, 'display_version': True}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']

# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# This is required for the alabaster theme
# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
# html_sidebars = {
#     '**': [
#         'relations.html',  # needs 'show_related': True theme option to display
#         'searchbox.html',
#     ]
# }

mathjax3_config = {
    'tex': {
        'macros': {
            'diag': '\\operatorname{diag}',
        }
    }
}

# -- Options for HTMLHelp output ------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = 'CKdoc'

# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #
    'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #
    'preamble': r'''
\setcounter{tocdepth}{5}
\newcommand{\diag}{\operatorname{diag}}
''',

    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'CK.tex', u'Composable Kernel (CK) Documentation',
     u'Advanced Micro Devices', 'manual'),
]

# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'ck', u'Composable Kernel (CK) Documentation',
     [author], 1)
]

# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'CK', u'Composable Kernel (CK) Documentation',
     author, 'CK', 'Composable Kernel for AMD ROCm',
     'Miscellaneous'),
]
docs/source/index.rst deleted (100644 → 0)

============================
Composable Kernel User Guide
============================

.. toctree::
   :maxdepth: 5
   :caption: Contents:
   :numbered:

   Linux_Install_Guide
   tutorial_hello_world
   dockerhub
   Supported_Primitives_Guide
   API_Reference_Guide
   Contributors_Guide
   Disclaimer
\ No newline at end of file
docs/source/rocm_logo.png deleted (100644 → 0), 347 KB
docs/source/tutorial_hello_world.rst → docs/tutorial_hello_world.rst (file moved)
example/30_grouped_conv_fwd_multiple_d/grouped_conv_fwd_xdl_fp16.cpp

 // SPDX-License-Identifier: MIT
-// Copyright (c) 2018-2022, Advanced Micro Devices, Inc. All rights reserved.
+// Copyright (c) 2018-2023, Advanced Micro Devices, Inc. All rights reserved.

 #include "common.hpp"
...
example/30_grouped_conv_fwd_multiple_d/run_grouped_conv_fwd_bias_relu_add_wmma_example.inc

@@ -74,8 +74,8 @@ using DeviceConvFwdInstance =
        8,                              // BBlockTransferSrcScalarPerVector
        8,                              // BBlockTransferDstScalarPerVector_BK1
        true,                           // BBlockLdsExtraN
        1, 1, 4, 2, S<1, 32, 1, 8>, 8>;
...
example/40_conv2d_fwd_quantization/CMakeLists.txt

@@ -14,3 +14,8 @@ add_example_executable(example_conv2d_fwd_xdl_bias_relu_perlayer_quantization_in
 add_example_executable(example_conv2d_fwd_dl_bias_relu_perchannel_quantization_int8 conv2d_fwd_dl_bias_relu_perchannel_quantization_int8.cpp)
 add_example_executable(example_conv2d_fwd_xdl_bias_relu_perchannel_quantization_int8 conv2d_fwd_xdl_bias_relu_perchannel_quantization_int8.cpp)
+# Conv + bias + tanh perlayer quantization
+add_example_executable(example_conv2d_fwd_dl_bias_tanh_perlayer_quantization_int8 conv2d_fwd_dl_bias_tanh_perlayer_quantization_int8.cpp)
+
+# Conv + bias + tanh perchannel quantization
+add_example_executable(example_conv2d_fwd_dl_bias_tanh_perchannel_quantization_int8 conv2d_fwd_dl_bias_tanh_perchannel_quantization_int8.cpp)
example/40_conv2d_fwd_quantization/conv2d_fwd_dl_bias_relu_perchannel_quantization_int8.cpp

@@ -76,6 +76,10 @@ using DeviceGroupedConvNDFwdInstance =
         5,               // CThreadTransferSrcDstVectorDim
         4>;              // CThreadTransferDstScalarPerVector

-#include "run_conv2d_fwd_bias_relu_perchannel_quantization_example.inc"
+#include "run_conv2d_fwd_bias_perchannel_quantization_example.inc"

-int main() { run_conv2d_fwd_bias_relu_perchannel_quantization_example(); };
+int main()
+{
+    const auto out_element_op = OutElementOp{ActivationOp{}};
+    run_conv2d_fwd_bias_perchannel_quantization_example(out_element_op);
+};
example/40_conv2d_fwd_quantization/conv2d_fwd_dl_bias_relu_perlayer_quantization_int8.cpp

@@ -74,6 +74,11 @@ using DeviceGroupedConvNDFwdInstance =
         5,               // CThreadTransferSrcDstVectorDim
         4>;              // CThreadTransferDstScalarPerVector

-#include "run_conv2d_fwd_bias_relu_perlayer_quantization_example.inc"
+#include "run_conv2d_fwd_bias_perlayer_quantization_example.inc"

-int main() { run_conv2d_fwd_bias_relu_perlayer_quantization_example(); }
+int main()
+{
+    float requant_scale       = 0.5f;
+    const auto out_element_op = OutElementOp{requant_scale, ActivationOp{}};
+    run_conv2d_fwd_bias_perlayer_quantization_example(out_element_op);
+}
example/40_conv2d_fwd_quantization/conv2d_fwd_dl_bias_tanh_perchannel_quantization_int8.cpp new file (0 → 100644)

// SPDX-License-Identifier: MIT
// Copyright (c) 2018-2022, Advanced Micro Devices, Inc. All rights reserved.

#include "common.hpp"

#include "ck/tensor_operation/gpu/device/device_grouped_conv_fwd_dl_multiple_d_nhwc_kyxc_nhwk.hpp"

using InDataType           = int8_t;
using WeiDataType          = int8_t;
using BiasDataType         = int32_t;
using RequantScaleDataType = float;
using AccDataType          = int32_t;
using OutDataType          = int8_t;

template <ck::index_t... Is>
using S = ck::Sequence<Is...>;

using PassThrough = ck::tensor_operation::element_wise::PassThrough;

using InElementOp  = PassThrough;
using WeiElementOp = PassThrough;
using ActivationOp = ck::tensor_operation::element_wise::TanH;
using OutElementOp = ck::tensor_operation::element_wise::Add_Mul2_Activation_Mul_Clamp<ActivationOp>;

static constexpr auto ConvSpec =
    ck::tensor_operation::device::ConvolutionForwardSpecialization::Default;

static constexpr auto GemmSpec = ck::tensor_operation::device::GemmSpecialization::MNKPadding;

template <ck::index_t NDimSpatial,
          typename InLayout,
          typename WeiLayout,
          typename BiasLayout,
          typename RequantScaleLayout,
          typename OutLayout>
using DeviceGroupedConvNDFwdInstance =
    ck::tensor_operation::device::DeviceGroupedConvFwdDlMultipleD_NHWC_KYXC_NHWK<
        NDimSpatial,
        InDataType,
        WeiDataType,
        ck::Tuple<BiasDataType, RequantScaleDataType>,
        OutDataType,
        AccDataType,
        InLayout,
        WeiLayout,
        ck::Tuple<BiasLayout, RequantScaleLayout>,
        OutLayout,
        InElementOp,
        WeiElementOp,
        OutElementOp,
        ConvSpec,            // ConvForwardSpecialization
        GemmSpec,            // GemmSpecialization
        256,                 // BlockSize
        128,                 // MPerBlock
        128,                 // NPerBlock
        16,                  // K0PerBlock
        4,                   // K1
        4,                   // M1PerThread
        4,                   // N1PerThread
        1,                   // KPerThread
        S<8, 2>,             // M1N1ThreadClusterM1Xs
        S<8, 2>,             // M1N1ThreadClusterN1Xs
        S<8, 1, 1, 4>,       // ABlockTransferThreadSliceLengths_K0_M0_M1_K1
        S<2, 1, 128, 1>,     // ABlockTransferThreadClusterLengths_K0_M0_M1_K1
        S<1, 2, 0, 3>,       // ABlockTransferThreadClusterArrangeOrder
        S<1, 2, 0, 3>,       // ABlockTransferSrcAccessOrder
        S<4, 1, 1, 4>,       // ABlockTransferSrcVectorTensorLengths_K0_M0_M1_K1
        S<1, 2, 0, 3>,       // ABlockTransferSrcVectorTensorContiguousDimOrder
        S<1, 1, 1, 4>,       // ABlockTransferDstVectorTensorLengths_K0_M0_M1_K1
        S<8, 1, 1, 4>,       // BBlockTransferThreadSliceLengths_K0_N0_N1_K1
        S<2, 1, 128, 1>,     // BBlockTransferThreadClusterLengths_K0_N0_N1_K1
        S<1, 2, 0, 3>,       // BBlockTransferThreadClusterArrangeOrder
        S<1, 2, 0, 3>,       // BBlockTransferSrcAccessOrder
        S<4, 1, 1, 4>,       // BBlockTransferSrcVectorTensorLengths_K0_N0_N1_K1
        S<1, 2, 0, 3>,       // BBlockTransferSrcVectorTensorContiguousDimOrder
        S<1, 1, 1, 4>,       // BBlockTransferDstVectorTensorLengths_K0_N0_N1_K1
        S<0, 1, 2, 3, 4, 5>, // CThreadTransferSrcDstAccessOrder
        5,                   // CThreadTransferSrcDstVectorDim
        4>;                  // CThreadTransferDstScalarPerVector

#include "run_conv2d_fwd_bias_perchannel_quantization_example.inc"

int main()
{
    float scale_z_inv         = 0.5f;
    const auto out_element_op = OutElementOp{scale_z_inv, ActivationOp{}};
    run_conv2d_fwd_bias_perchannel_quantization_example(out_element_op);
};
example/40_conv2d_fwd_quantization/conv2d_fwd_dl_bias_tanh_perlayer_quantization_int8.cpp new file (0 → 100644)

// SPDX-License-Identifier: MIT
// Copyright (c) 2018-2022, Advanced Micro Devices, Inc. All rights reserved.

#include "common.hpp"

#include "ck/tensor_operation/gpu/device/device_grouped_conv_fwd_dl_multiple_d_nhwc_kyxc_nhwk.hpp"

using InDataType   = int8_t;
using WeiDataType  = int8_t;
using BiasDataType = int32_t;
using AccDataType  = int32_t;
using OutDataType  = int8_t;

template <ck::index_t... Is>
using S = ck::Sequence<Is...>;

using PassThrough = ck::tensor_operation::element_wise::PassThrough;

using InElementOp  = PassThrough;
using WeiElementOp = PassThrough;
using ActivationOp = ck::tensor_operation::element_wise::TanH;
using OutElementOp = ck::tensor_operation::element_wise::Add_Mul_Activation_Mul_Clamp<ActivationOp>;

static constexpr auto ConvSpec =
    ck::tensor_operation::device::ConvolutionForwardSpecialization::Default;

static constexpr auto GemmSpec = ck::tensor_operation::device::GemmSpecialization::MNKPadding;

template <ck::index_t NDimSpatial,
          typename InLayout,
          typename WeiLayout,
          typename BiasLayout,
          typename OutLayout>
using DeviceGroupedConvNDFwdInstance =
    ck::tensor_operation::device::DeviceGroupedConvFwdDlMultipleD_NHWC_KYXC_NHWK<
        NDimSpatial,
        InDataType,
        WeiDataType,
        ck::Tuple<BiasDataType>,
        OutDataType,
        AccDataType,
        InLayout,
        WeiLayout,
        ck::Tuple<BiasLayout>,
        OutLayout,
        InElementOp,
        WeiElementOp,
        OutElementOp,
        ConvSpec,            // ConvForwardSpecialization
        GemmSpec,            // GemmSpecialization
        256,                 // BlockSize
        128,                 // MPerBlock
        128,                 // NPerBlock
        16,                  // K0PerBlock
        4,                   // K1
        4,                   // M1PerThread
        4,                   // N1PerThread
        1,                   // KPerThread
        S<8, 2>,             // M1N1ThreadClusterM1Xs
        S<8, 2>,             // M1N1ThreadClusterN1Xs
        S<8, 1, 1, 4>,       // ABlockTransferThreadSliceLengths_K0_M0_M1_K1
        S<2, 1, 128, 1>,     // ABlockTransferThreadClusterLengths_K0_M0_M1_K1
        S<1, 2, 0, 3>,       // ABlockTransferThreadClusterArrangeOrder
        S<1, 2, 0, 3>,       // ABlockTransferSrcAccessOrder
        S<4, 1, 1, 4>,       // ABlockTransferSrcVectorTensorLengths_K0_M0_M1_K1
        S<1, 2, 0, 3>,       // ABlockTransferSrcVectorTensorContiguousDimOrder
        S<1, 1, 1, 4>,       // ABlockTransferDstVectorTensorLengths_K0_M0_M1_K1
        S<8, 1, 1, 4>,       // BBlockTransferThreadSliceLengths_K0_N0_N1_K1
        S<2, 1, 128, 1>,     // BBlockTransferThreadClusterLengths_K0_N0_N1_K1
        S<1, 2, 0, 3>,       // BBlockTransferThreadClusterArrangeOrder
        S<1, 2, 0, 3>,       // BBlockTransferSrcAccessOrder
        S<4, 1, 1, 4>,       // BBlockTransferSrcVectorTensorLengths_K0_N0_N1_K1
        S<1, 2, 0, 3>,       // BBlockTransferSrcVectorTensorContiguousDimOrder
        S<1, 1, 1, 4>,       // BBlockTransferDstVectorTensorLengths_K0_N0_N1_K1
        S<0, 1, 2, 3, 4, 5>, // CThreadTransferSrcDstAccessOrder
        5,                   // CThreadTransferSrcDstVectorDim
        4>;                  // CThreadTransferDstScalarPerVector

#include "run_conv2d_fwd_bias_perlayer_quantization_example.inc"

int main()
{
    float scale_acc           = 0.5f;
    float scale_z_inv         = 0.5f;
    const auto out_element_op = OutElementOp{scale_z_inv, scale_acc, ActivationOp{}};
    run_conv2d_fwd_bias_perlayer_quantization_example(out_element_op);
}
example/40_conv2d_fwd_quantization/conv2d_fwd_dl_perchannel_quantization_int8.cpp

@@ -76,4 +76,8 @@ using DeviceGroupedConvNDFwdInstance =
 #include "run_conv2d_fwd_perchannel_quantization_example.inc"

-int main() { run_conv2d_fwd_perchannel_quantization_example(); }
+int main()
+{
+    const auto out_element_op = OutElementOp{ActivationOp{}};
+    run_conv2d_fwd_perchannel_quantization_example(out_element_op);
+}
example/40_conv2d_fwd_quantization/conv2d_fwd_dl_perlayer_quantization_int8.cpp

@@ -71,4 +71,9 @@ using DeviceGroupedConvNDFwdInstance =
 #include "run_conv2d_fwd_perlayer_quantization_example.inc"

-int main() { run_conv2d_fwd_perlayer_quantization_example(); }
+int main()
+{
+    float requant_scale       = 0.5f;
+    const auto out_element_op = OutElementOp{requant_scale, ActivationOp{}};
+    run_conv2d_fwd_perlayer_quantization_example(out_element_op);
+}
example/40_conv2d_fwd_quantization/conv2d_fwd_xdl_bias_relu_perchannel_quantization_int8.cpp

@@ -80,6 +80,10 @@ using DeviceGroupedConvNDFwdInstance =
         S<1, 64, 1, 4>,
         8>;

-#include "run_conv2d_fwd_bias_relu_perchannel_quantization_example.inc"
+#include "run_conv2d_fwd_bias_perchannel_quantization_example.inc"

-int main() { run_conv2d_fwd_bias_relu_perchannel_quantization_example(); };
+int main()
+{
+    const auto out_element_op = OutElementOp{ActivationOp{}};
+    run_conv2d_fwd_bias_perchannel_quantization_example(out_element_op);
+};
example/40_conv2d_fwd_quantization/conv2d_fwd_xdl_bias_relu_perlayer_quantization_int8.cpp

@@ -78,6 +78,11 @@ using DeviceGroupedConvNDFwdInstance =
         S<1, 64, 1, 4>,
         8>;

-#include "run_conv2d_fwd_bias_relu_perlayer_quantization_example.inc"
+#include "run_conv2d_fwd_bias_perlayer_quantization_example.inc"

-int main() { run_conv2d_fwd_bias_relu_perlayer_quantization_example(); }
+int main()
+{
+    float requant_scale       = 0.5f;
+    const auto out_element_op = OutElementOp{requant_scale, ActivationOp{}};
+    run_conv2d_fwd_bias_perlayer_quantization_example(out_element_op);
+}
example/40_conv2d_fwd_quantization/conv2d_fwd_xdl_perchannel_quantization_int8.cpp

@@ -80,4 +80,8 @@ using DeviceGroupedConvNDFwdInstance =
 #include "run_conv2d_fwd_perchannel_quantization_example.inc"

-int main() { run_conv2d_fwd_perchannel_quantization_example(); }
+int main()
+{
+    const auto out_element_op = OutElementOp{ActivationOp{}};
+    run_conv2d_fwd_perchannel_quantization_example(out_element_op);
+}
example/40_conv2d_fwd_quantization/conv2d_fwd_xdl_perlayer_quantization_int8.cpp

@@ -75,4 +75,9 @@ using DeviceGroupedConvNDFwdInstance =
 #include "run_conv2d_fwd_perlayer_quantization_example.inc"

-int main() { run_conv2d_fwd_perlayer_quantization_example(); }
+int main()
+{
+    float requant_scale       = 0.5f;
+    const auto out_element_op = OutElementOp{requant_scale, ActivationOp{}};
+    run_conv2d_fwd_perlayer_quantization_example(out_element_op);
+}
example/40_conv2d_fwd_quantization/run_conv2d_fwd_bias_relu_perchannel_quantization_example.inc → example/40_conv2d_fwd_quantization/run_conv2d_fwd_bias_perchannel_quantization_example.inc

@@ -167,7 +167,7 @@ bool run_grouped_conv_fwd(bool do_verification,
     return (pass ? 0 : 1);
 }

-int run_conv2d_fwd_bias_relu_perchannel_quantization_example()
+int run_conv2d_fwd_bias_perchannel_quantization_example(const OutElementOp& out_element_op)
 {
     bool do_verification = true;
     bool time_kernel     = true;
...
@@ -189,7 +189,6 @@ int run_conv2d_fwd_bias_relu_perchannel_quantization_example()
     const auto in_element_op  = InElementOp{};
     const auto wei_element_op = WeiElementOp{};
-    const auto out_element_op = OutElementOp{ActivationOp{}};

     using InLayout  = ck::tensor_layout::convolution::GNHWC;
     using WeiLayout = ck::tensor_layout::convolution::GKYXC;
...
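
Editor's note: taken together, these changes turn the activation-specific example runners into generic ones that receive the fully constructed output element-wise operator from the caller; each example's main() now builds that operator itself. Below is a minimal sketch of the resulting caller pattern, not a new file in this commit: it assumes the type aliases (OutElementOp, ActivationOp) and the runner .inc of one of the example translation units shown above, per-layer variants additionally pass a scalar requantization factor, and the per-channel variants appear to carry their scales as an extra D tensor (see the ck::Tuple<BiasDataType, RequantScaleDataType> in the new tanh per-channel example).

// Sketch of the post-change caller, mirroring the updated int8 examples.
// OutElementOp, ActivationOp and the runner come from the enclosing example TU.
#include "run_conv2d_fwd_bias_perchannel_quantization_example.inc"

int main()
{
    // Per-channel form: only the activation functor is supplied here.
    const auto out_element_op = OutElementOp{ActivationOp{}};
    // Per-layer form would instead be: OutElementOp{requant_scale, ActivationOp{}}.
    run_conv2d_fwd_bias_perchannel_quantization_example(out_element_op);
}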