Commit 421ecad6 authored by Alan Turner

Merge remote-tracking branch 'origin/develop' into ck-gsg

parents 3d0426e9 7cf05301
# Workflows
## `add-to-project.yaml`
<p>
This workflow adds pull requests and issues to a specific GitHub project board when they are opened.
</p>
- ## Trigger
The workflow is triggered by the following events:
> - A pull request being opened.
> - An issue being opened.
- ## Jobs
The workflow has a single job named `add-to-project`. The following step is executed in this job:
> - The `add-to-project` job uses the `actions/add-to-project@v0.4.0` action to add pull requests and issues to a specific project board. The `with` parameters are `project-url` and `github-token`, which specify the URL of the project board and the GitHub token used to authenticate the action.
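As an illustration, here is a minimal sketch of such a workflow; the project URL and token secret are placeholders, not the repository's actual values, which live in the linked file:

```yaml
name: Add to project
on:
  issues:
    types: [opened]
  pull_request:
    types: [opened]
jobs:
  add-to-project:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/add-to-project@v0.4.0
        with:
          # Placeholder values; see add-to-project.yaml for the real board and secret
          project-url: https://github.com/orgs/<org>/projects/<number>
          github-token: ${{ secrets.GITHUB_TOKEN }}
```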
For more details, please refer to the [add-to-project.yaml](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/blob/develop/.github/workflows/add-to-project.yaml) file in the repository.
---
## `benchmark.yaml`
<p>
This workflow runs the MIGraphX performance benchmarks and generates reports by comparing the results with reference data.
</p>
- ## Trigger
TODO: Update [benchmarks.yml (archived)](https://github.com/ROCmSoftwarePlatform/actions/blob/main/.github/workflows/benchmarks.yml) link after workflow is updated
> The workflow is triggered manually through the "Run workflow" button in the Actions tab of the repository, and it runs the reusable workflow [benchmarks.yml (archived)](https://github.com/ROCmSoftwarePlatform/actions/blob/main/.github/workflows/benchmarks.yml).
- ## Input Parameters
The workflow uses the following input parameters:
> - `rocm_version`: the version of ROCm to use for running the benchmarks.
> - `script_repo`: repository that contains the benchmark scripts.
> - `result_path`: the path where benchmark results will be stored.
> - `result_repo`: the repository where the benchmark results will be pushed for comparison.
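A sketch of how these inputs are plausibly declared under `workflow_dispatch` — the input names come from the list above, while the descriptions and `required` flags are illustrative assumptions:

```yaml
on:
  workflow_dispatch:
    inputs:
      rocm_version:
        description: ROCm version to use for running the benchmarks
        required: true
      script_repo:
        description: Repository that contains the benchmark scripts
        required: true
      result_path:
        description: Path where benchmark results will be stored
        required: true
      result_repo:
        description: Repository where results are pushed for comparison
        required: true
```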
For more details, please refer to the [benchmark.yaml](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/blob/develop/.github/workflows/benchmark.yaml) file in the repository.
---
## `ci.yaml`
<p>
This workflow automates the process of building and testing the AMDMIGraphX project across multiple platforms and configurations.
</p>
- ## Trigger
The workflow is triggered by the following events:
> - A pull request being opened, synchronized, or closed.
> - A push to the `develop`, `master`, or `release/**` branches.
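This corresponds to the following `on:` block, shown in the ci.yaml diff later in this commit:

```yaml
on:
  pull_request:
  push:
    branches:
      - develop
      - master
      - 'release/**'
```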
- ## Jobs
The following jobs are executed in the workflow:
> - `cancel`: This job cancels any previous runs of the workflow that are still in progress. It runs on an `ubuntu-latest` runner and uses the `styfle/cancel-workflow-action` action.
> - `tidy`: It runs on an `ubuntu-20.04` runner and runs `clang-tidy` for the codebase in a Docker container with the MIGraphX build environment.
> - `cppcheck`: It runs on an `ubuntu-20.04` runner and performs static analysis on code in a Docker container, and caches the results for faster subsequent runs.
> - `format`: It runs on an `ubuntu-20.04` runner and includes steps for freeing up disk space, caching Docker layers, and checking code formatting.
> - `pyflakes`: It runs on an `ubuntu-20.04` runner and runs the Pyflakes static analysis tool to detect and report Python code issues.
> - `licensing`: It runs on an `ubuntu-20.04` runner and includes steps to free up space, checkout the code, set up Python and run a license check using a Python script.
---
Two jobs use multiple matrix configurations. Both run on an `ubuntu-20.04` runner, but currently only `linux` works on all three configurations (debug, release, codecov); `linux-fpga` works only with debug. A sketch of the matrix follows the job list below.
---
> - `linux`: this job runs continuous integration tests for AMDMIGraphX on a Linux operating system. It tests a variety of build configurations to ensure code quality and compatibility.
> - `linux-fpga`: this job builds and tests AMDMIGraphX on a Linux operating system with support for FPGA acceleration. It includes additional steps to verify FPGA functionality and performance.
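A minimal sketch of the matrix shape — the axis names match the `matrix.os` and `matrix.configuration` references in ci.yaml, but the exact value lists are assumptions based on the description above:

```yaml
strategy:
  matrix:
    os: [ubuntu-20.04]
    configuration: [debug, release, codecov]
```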
For more details, please refer to the [ci.yaml](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/blob/develop/.github/workflows/ci.yaml) file in the repository.
---
## `clean-closed-pr-caches.yaml`
<p>
This workflow cleans up any cached data related to a pull request once the pull request is closed.
</p>
- ## Trigger
The workflow is triggered by the following events:
> - A pull request being closed.
- ## Jobs
The workflow has a single job named `cleanup`. The following steps are executed in this job:
> - `Check out code`: step checks out the codebase from the repository.
> - `Cleanup`: step performs the actual cache cleanup using a series of commands.
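The cleanup commands appear in full in the clean-closed-pr-caches.yaml source later in this commit; the core of the step looks like this:

```yaml
- name: Cleanup
  run: |
    gh extension install actions/gh-actions-cache --pin v1.0.1
    REPO=${{ github.repository }}
    BRANCH="refs/pull/${{ github.event.pull_request.number }}/merge"
    cacheKeysForPR=$(gh actions-cache list -R $REPO -B $BRANCH | cut -f 1 | tail -n +3)
    ## Setting this to not fail the workflow while deleting cache keys.
    set +e
    for cacheKey in $cacheKeysForPR
    do
      gh actions-cache delete $cacheKey -R $REPO -B $BRANCH --confirm
    done
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```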
For more details, please refer to the [clean-closed-pr-caches.yaml](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/blob/develop/.github/workflows/clean-closed-pr-caches.yaml) file in the repository.
---
## `history.yaml`
<p>
This workflow generates a report of the MIGraphX benchmark results between two dates and sends it to a specified email address. The report is also uploaded to a specified repository.
</p>
- ## Trigger
> The workflow is triggered manually through the "Run workflow" button in the Actions tab of the repository, and it runs the reusable workflow [history.yml](https://github.com/ROCmSoftwarePlatform/migraphx-benchmark/blob/main/.github/workflows/history.yml).
- ## Input Parameters
The workflow requires the following inputs:
> - `start_date`: Start date for results analysis.
> - `end_date`: End date for results analysis.
> - `history_repo`: Repository for history results between dates.
> - `benchmark_utils_repo`: Repository where benchmark utils are stored.
> - `organization`: Organization that determines the location of files.
For more details, please refer to the [history.yaml](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/blob/develop/.github/workflows/history.yaml) file in the repository.
---
## `performance.yaml`
<p>
This workflow runs performance tests on the MIGraphX repository and generates a report of the results.
</p>
- ## Trigger
The workflow runs the reusable workflow [perf-test.yml](https://github.com/ROCmSoftwarePlatform/migraphx-benchmark/blob/main/.github/workflows/perf-test.yml) on the following events:
> - A pull request against the `develop` branch being opened, synchronized, or closed.
> - Schedule: runs Monday through Saturday at 6:00 AM.
> - Manual trigger through the "Run workflow" button in the Actions tab of the repository.
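A sketch of the resulting trigger block; the cron expression is an assumption derived from the schedule described above, not copied from the file:

```yaml
on:
  pull_request:
    branches: [develop]
    types: [opened, synchronize, closed]
  schedule:
    - cron: '0 6 * * 1-6'  # assumed encoding of "Monday to Saturday at 6:00 AM"
  workflow_dispatch:
```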
- ## Input Parameters
The workflow requires the following inputs:
> - `rocm_release`: ROCm version to use for the performance tests.
> - `performance_reports_repo`: Repository where the performance reports are stored.
> - `benchmark_utils_repo`: Repository where the benchmark utilities are stored.
> - `organization`: Organization that determines the location of files.
> - `result_number`: Last N results.
> - `model_timeout`: If a model in the performance test script exceeds this time threshold, it is skipped.
> - `flags`: Command line arguments to be passed to the performance test script. Default is `-r`.
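The diff later in this commit shows how these inputs are forwarded to the reusable workflow with fallback defaults, for example:

```yaml
with:
  result_number: ${{ github.event.inputs.result_number || '10' }}
  flags: ${{ github.event.inputs.flags || '-r' }}
  performance_reports_repo: ${{ github.event.inputs.performance_reports_repo || 'ROCmSoftwarePlatform/migraphx-reports' }}
  model_timeout: ${{ github.event.inputs.model_timeout || '30m' }}
```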
For more details, please refer to the [performance.yaml](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/blob/develop/.github/workflows/performance.yaml) file in the repository.
---
## `rocm-image-release.yaml`
<p>
This workflow builds a Docker image for a specified ROCm release version and pushes it to the specified repository. If the image already exists, nothing happens unless the `overwrite` option is set.
</p>
- ## Trigger
> The workflow is triggered manually through the "Run workflow" button in the Actions tab of the repository, and it runs the reusable workflow [rocm-release.yml](https://github.com/ROCmSoftwarePlatform/migraphx-benchmark/blob/main/.github/workflows/rocm-release.yml).
- ## Input Parameters
The workflow requires the following inputs:
> - `rocm_release`: ROCm release version to build Docker image for.
> - `benchmark_utils_repo`: Repository where benchmark utils are stored.
> - `base_image`: Base image for ROCm Docker build.
> - `docker_image`: Docker image name for ROCm Docker build.
> - `build_navi`: Build number for the Navi architecture.
> - `organization`: The organization name used to determine the location of files.
> - `overwrite`: Specify whether to overwrite the Docker image if it already exists.
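As shown in the diff later in this commit, the job forwards these inputs (with defaults) to the reusable workflow:

```yaml
jobs:
  release:
    uses: ROCmSoftwarePlatform/migraphx-benchmark/.github/workflows/rocm-release.yml@main
    with:
      rocm_release: ${{ github.event.inputs.rocm_release || '5.1' }}
      base_image: ${{ github.event.inputs.base_image || 'rocm/dev-ubuntu-20.04' }}
      docker_image: ${{ github.event.inputs.docker_image || 'rocm-migraphx' }}
      build_navi: ${{ github.event.inputs.build_navi || '0' }}
      overwrite: ${{ github.event.inputs.overwrite == 'true' }}
    secrets:
      gh_token: ${{ secrets.MIGRAPHX_BOT_TOKEN }}
```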
For more details, please refer to the [rocm-image-release.yaml](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/blob/develop/.github/workflows/rocm-image-release.yaml) file in the repository.
---
## `sync-onnxrt-main.yaml`
<p>
This workflow updates a file with the latest commit hash of the `microsoft/onnxruntime` repository, then creates a pull request with that hash and adds labels, assignees, reviewers, and a title and body describing the changes.
</p>
- ## Trigger
The workflow is triggered by the following events:
> - Schedule: Runs every week on Friday at 05:07 PM.
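In cron syntax, as in the workflow file shown later in this commit (GitHub Actions schedules run in UTC):

```yaml
on:
  schedule:
    - cron: '07 17 * * 5'  # 17:07 every Friday
```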
- ## Jobs
The workflow has a single job named `Update and create pull request`. The following steps are executed in this job:
> - `get_date`: step sets an environment variable to the current date in the format 'YYYY-MM-DD'.
> - `extract_sha1`: step fetches the latest SHA1 commit hash of the HEAD branch of the `microsoft/onnxruntime` repository and sets it as an environment variable.
> - `echo_sha1`: step prints the SHA1 commit hash set in step `extract_sha1`.
> - `actions/checkout@v3`: step checks out the codebase from the repository.
> - `update_file`: step updates a file in the repository with the SHA1 commit hash fetched in step `extract_sha1`.
> - `Make changes to pull request`: step uses the `peter-evans/create-pull-request` action to create a pull request.
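A hedged sketch of that final step: the `peter-evans/create-pull-request` action and its listed inputs are real, but the version, labels, reviewers, branch name, and environment variable names below are placeholders rather than the repository's actual values:

```yaml
- name: Make changes to pull request
  uses: peter-evans/create-pull-request@v4
  with:
    commit-message: 'Update onnxruntime commit hash to ${{ env.sha1 }}'  # env.sha1 assumed set by extract_sha1
    title: 'Weekly sync with onnxruntime main'
    body: 'Updates the pinned onnxruntime commit to the latest main.'
    labels: placeholder-label
    assignees: placeholder-user
    reviewers: placeholder-user
    branch: sync-onnxrt-main-${{ env.date }}  # env.date assumed set by get_date
```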
For more details, please refer to the [sync-onnxrt-main.yaml](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/blob/develop/.github/workflows/sync-onnxrt-main.yaml) file in the repository.
---
 name: migraphx
-on: [push, pull_request]
+on:
+  pull_request:
+  push:
+    branches:
+      - develop
+      - master
+      - 'release/**'
 jobs:
   cancel:
@@ -16,43 +23,33 @@ jobs:
     steps:
     - name: Free space
       run: |
-        sudo rm -rf /usr/local/android /usr/share/dotnet /usr/local/share/boost /opt/ghc /usr/local/share/chrom* /usr/share/swift /usr/local/julia* /usr/local/lib/android /usr/local/graalvm /usr/local/aws* /usr/local/lib/heroku
-        du . --max-depth=1 -h
-        ls -la
-        cd /usr/local
-        du . --max-depth=1 -h
-        ls -la
-        cd /usr/local/lib
-        echo $(pwd)
-        du . --max-depth=1 -h
-        ls -la
+        sudo rm -rf /usr/local/android /usr/share/dotnet /usr/local/share/boost /opt/ghc /usr/local/share/chrom* /usr/share/swift /usr/local/julia* /usr/local/lib/android /usr/local/graalvm /usr/local/aws* /usr/local/lib/heroku /opt/az /opt/hostedtoolcache/Ruby /opt/hostedtoolcache/node /opt/hostedtoolcache/go /opt/hostedtoolcache/Python/2* /opt/hostedtoolcache/CodeQL /opt/microsoft /etc/skel /home/runner/.rustup/toolchains
     - uses: actions/checkout@v3
     # In this step, this action saves a list of existing images,
     # the cache is created without them in the post run.
     # It also restores the cache if it exists.
-    # name: Docker Layer Caching2
-    - uses: jpribyl/action-docker-layer-caching@v0.1.1
+    - name: Docker layer cache
+      uses: jpribyl/action-docker-layer-caching@v0.1.1
+      with:
+        key: docker-layer-caching-migraphx-${{hashFiles('hip-clang.docker', '**/*requirements.txt', '**/install_prereqs.sh', 'rbuild.ini')}}
+        restore-keys:
+          docker-layer-caching-migraphx-
      # Ignore the failure of a step and avoid terminating the job.
       continue-on-error: true
-    - name: Prepare timestamp
-      id: cache_timestamp
-      shell: bash
-      run: echo timestamp="$(date +'%Y-%m-%dT%H:%M:%S')" >> $GITHUB_OUTPUT
-    - name: Cache files for tidy
-      uses: pat-s/always-upload-cache@v3.0.11
+    - name: Restore cache files for tidy
+      uses: actions/cache/restore@v3
+      id: tidy_restore
       with:
         path: tidy-cache
-        key: tidy-cache-${{ steps.cache_timestamp.outputs.timestamp }}
-        restore-keys: |
-          tidy-cache-${{ steps.cache_timestamp.outputs.timestamp }}
-          tidy-cache-
+        key: tidy-cache-${{ github.ref }}
+        restore-keys: tidy-cache-
     - name: Build the Docker image
-      run: docker build . --file hip-clang.docker --tag migraphx
+      run: |
+        docker build . --file hip-clang.docker --tag migraphx
     - name: Clang tidy
       shell: bash -c "docker run -i -v=$GITHUB_WORKSPACE:/data -w /data migraphx bash < {0}"
@@ -70,34 +67,53 @@ jobs:
         ..
         make -j2 -k onnx-proto tf-proto tidy
+    # GH actions can not update existing cache, as a workaround clear cache and then save it
+    - name: Clear tidy cache before saving
+      if: ${{ steps.tidy_restore.outputs.cache-hit }}
+      shell: bash
+      env:
+        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+      run: |
+        gh extension install actions/gh-actions-cache --pin v1.0.1
+        gh actions-cache delete ${{ steps.tidy_restore.outputs.cache-matched-key }} --confirm
+      continue-on-error: true
+    - name: Save cache files for tidy
+      uses: actions/cache/save@v3
+      if: always()
+      with:
+        path: tidy-cache
+        key: tidy-cache-${{ github.ref }}
   cppcheck:
     runs-on: ubuntu-20.04
     steps:
     - name: Free space
-      run: sudo rm -rf /usr/local/android /usr/share/dotnet /usr/local/share/boost /opt/ghc /usr/local/share/chrom* /usr/share/swift /usr/local/julia* /usr/local/lib/android /usr/local/graalvm /usr/local/aws* /usr/local/lib/heroku
+      run: |
+        sudo rm -rf /usr/local/android /usr/share/dotnet /usr/local/share/boost /opt/ghc /usr/local/share/chrom* /usr/share/swift /usr/local/julia* /usr/local/lib/android /usr/local/graalvm /usr/local/aws* /usr/local/lib/heroku /opt/az /opt/hostedtoolcache/Ruby /opt/hostedtoolcache/node /opt/hostedtoolcache/go /opt/hostedtoolcache/Python/2* /opt/hostedtoolcache/CodeQL /opt/microsoft /etc/skel /home/runner/.rustup/toolchains
     - uses: actions/checkout@v3
     # In this step, this action saves a list of existing images,
     # the cache is created without them in the post run.
     # It also restores the cache if it exists.
-    - uses: jpribyl/action-docker-layer-caching@v0.1.1
+    - name: Docker layer cache
+      uses: jpribyl/action-docker-layer-caching@v0.1.1
+      with:
+        key: docker-layer-caching-migraphx-${{hashFiles('hip-clang.docker', '**/*requirements.txt', '**/install_prereqs.sh', 'rbuild.ini')}}
+        restore-keys:
+          docker-layer-caching-migraphx-
      # Ignore the failure of a step and avoid terminating the job.
       continue-on-error: true
-    - name: Prepare timestamp
-      id: cache_timestamp
-      shell: bash
-      run: echo timestamp="$(date +'%Y-%m-%dT%H:%M:%S')" >> $GITHUB_OUTPUT
-    - name: Cache files for cppcheck
-      uses: pat-s/always-upload-cache@v2.1.3
+    - name: Restore cache files for cppcheck
+      id: cppcheck_restore
+      uses: actions/cache/restore@v3
       with:
         path: cppcheck-cache
-        key: cppcheck-cache-${{ hashFiles('cppcheck.rules', 'CMakeLists.txt') }}-${{ steps.cache_timestamp.outputs.timestamp }}
-        restore-keys: |
-          cppcheck-cache-${{ hashFiles('cppcheck.rules', 'CMakeLists.txt') }}-${{ steps.cache_timestamp.outputs.timestamp }}
-          cppcheck-cache-${{ hashFiles('cppcheck.rules', 'CMakeLists.txt') }}-
+        key: cppcheck-cache-${{ hashFiles('cppcheck.rules', 'CMakeLists.txt') }}-${{ github.ref }}
+        restore-keys: cppcheck-cache-${{ hashFiles('cppcheck.rules', 'CMakeLists.txt') }}-
     - name: Build the Docker image
       run: docker build . --file hip-clang.docker --tag migraphx
@@ -114,18 +130,43 @@ jobs:
         ..
         make -j2 cppcheck
+    # GH actions can not update existing cache, as a workaround clear cache and then save it
+    - name: Clear cppcheck cache before saving
+      if: ${{ steps.cppcheck_restore.outputs.cache-hit }}
+      shell: bash
+      env:
+        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+      run: |
+        gh extension install actions/gh-actions-cache --pin v1.0.1
+        gh actions-cache delete ${{ steps.cppcheck_restore.outputs.cache-matched-key }} --confirm
+      continue-on-error: true
+    - name: Save cache files for cppcheck
+      uses: actions/cache/save@v3
+      if: always()
+      with:
+        path: cppcheck-cache
+        key: cppcheck-cache-${{ hashFiles('cppcheck.rules', 'CMakeLists.txt') }}-${{ github.ref }}
   format:
     runs-on: ubuntu-20.04
     steps:
     - name: Free space
-      run: sudo rm -rf /usr/local/android /usr/share/dotnet /usr/local/share/boost /opt/ghc /usr/local/share/chrom* /usr/share/swift /usr/local/julia* /usr/local/lib/android /usr/local/graalvm /usr/local/aws* /usr/local/lib/heroku
+      run: |
+        sudo rm -rf /usr/local/android /usr/share/dotnet /usr/local/share/boost /opt/ghc /usr/local/share/chrom* /usr/share/swift /usr/local/julia* /usr/local/lib/android /usr/local/graalvm /usr/local/aws* /usr/local/lib/heroku /opt/az /opt/hostedtoolcache/Ruby /opt/hostedtoolcache/node /opt/hostedtoolcache/go /opt/hostedtoolcache/Python/2* /opt/hostedtoolcache/CodeQL /opt/microsoft /etc/skel /home/runner/.rustup/toolchains
     - uses: actions/checkout@v3
     # In this step, this action saves a list of existing images,
     # the cache is created without them in the post run.
     # It also restores the cache if it exists.
-    - uses: jpribyl/action-docker-layer-caching@v0.1.1
+    - name: Docker layer cache
+      uses: jpribyl/action-docker-layer-caching@v0.1.1
+      with:
+        key: docker-layer-caching-migraphx-${{hashFiles('hip-clang.docker', '**/*requirements.txt', '**/install_prereqs.sh', 'rbuild.ini')}}
+        restore-keys:
+          docker-layer-caching-migraphx-
      # Ignore the failure of a step and avoid terminating the job.
       continue-on-error: true
@@ -155,7 +196,8 @@ jobs:
     steps:
     - name: Free space
-      run: sudo rm -rf /usr/local/android /usr/share/dotnet /usr/local/share/boost /opt/ghc /usr/local/share/chrom* /usr/share/swift /usr/local/julia* /usr/local/lib/android /usr/local/graalvm /usr/local/aws* /usr/local/lib/heroku
+      run: |
+        sudo rm -rf /usr/local/android /usr/share/dotnet /usr/local/share/boost /opt/ghc /usr/local/share/chrom* /usr/share/swift /usr/local/julia* /usr/local/lib/android /usr/local/graalvm /usr/local/aws* /usr/local/lib/heroku /opt/az /opt/hostedtoolcache/Ruby /opt/hostedtoolcache/node /opt/hostedtoolcache/go /opt/hostedtoolcache/Python/2* /opt/hostedtoolcache/CodeQL /opt/microsoft /etc/skel /home/runner/.rustup/toolchains
     - uses: actions/checkout@v3
     - name: Set up Python
       uses: actions/setup-python@v4
@@ -176,7 +218,8 @@ jobs:
     steps:
     - name: Free space
-      run: sudo rm -rf /usr/local/android /usr/share/dotnet /usr/local/share/boost /opt/ghc /usr/local/share/chrom* /usr/share/swift /usr/local/julia* /usr/local/lib/android /usr/local/graalvm /usr/local/aws* /usr/local/lib/heroku
+      run: |
+        sudo rm -rf /usr/local/android /usr/share/dotnet /usr/local/share/boost /opt/ghc /usr/local/share/chrom* /usr/share/swift /usr/local/julia* /usr/local/lib/android /usr/local/graalvm /usr/local/aws* /usr/local/lib/heroku /opt/az /opt/hostedtoolcache/Ruby /opt/hostedtoolcache/node /opt/hostedtoolcache/go /opt/hostedtoolcache/Python/2* /opt/hostedtoolcache/CodeQL /opt/microsoft /etc/skel /home/runner/.rustup/toolchains
     - uses: actions/checkout@v3
     - name: Set up Python
       uses: actions/setup-python@v4
@@ -206,8 +249,13 @@ jobs:
         - codecov
     steps:
-    - name: Free space
-      run: sudo rm -rf /usr/local/android /usr/share/dotnet /usr/local/share/boost /opt/ghc /usr/local/share/chrom* /usr/share/swift /usr/local/julia* /usr/local/lib/android /usr/local/graalvm /usr/local/aws* /usr/local/lib/heroku
+    - name: Free space and install rbuild, lld
+      run: |
+        sudo rm -rf /usr/local/android /usr/share/dotnet /usr/local/share/boost /opt/ghc /usr/local/share/chrom* /usr/share/swift /usr/local/julia* /usr/local/lib/android /usr/local/graalvm /usr/local/aws* /usr/local/lib/heroku /opt/az /opt/hostedtoolcache/Ruby /opt/hostedtoolcache/node /opt/hostedtoolcache/go /opt/hostedtoolcache/Python/2* /opt/hostedtoolcache/CodeQL /opt/microsoft /etc/skel /home/runner/.rustup/toolchains
+        sudo apt-get install -y lld
+        python -m pip install --upgrade pip
+        pip install https://github.com/RadeonOpenCompute/rbuild/archive/master.tar.gz
     - uses: actions/checkout@v3
     - name: Set up Python
       uses: actions/setup-python@v4
@@ -217,36 +265,25 @@ jobs:
       # Ignore the failure of a step and avoid terminating the job.
       continue-on-error: true
       uses: actions/cache@v3
+      id: deps_cache
       with:
         # This path is specific to Ubuntu
         path: ${{ github.workspace }}/cget
         # Look to see if there is a cache hit for the corresponding requirements file
-        key:
-          ${{ matrix.os }}-cget-4-${{ hashFiles('requirements.txt', 'dev-requirements.txt') }}
-          ${{ matrix.os }}-cget-4-
+        key: ${{ matrix.os }}-cget-4-${{ hashFiles('requirements.txt', 'dev-requirements.txt') }}
+        restore-keys: ${{ matrix.os }}-cget-4-
     - name: Install dependencies
-      run: |
-        python -m pip install --upgrade pip
-        pip install https://github.com/RadeonOpenCompute/rbuild/archive/master.tar.gz
-        rbuild prepare -d cget -s gh
-        sudo apt-get install -y lld
-    - name: Prepare timestamp
-      id: cache_timestamp
-      shell: bash
-      run: echo timestamp="$(date +'%Y-%m-%dT%H:%M:%S')" >> $GITHUB_OUTPUT
-    - name: Cache files for ccache
-      # Ignore the failure of a step and avoid terminating the job.
-      continue-on-error: true
-      uses: pat-s/always-upload-cache@v2.1.3
+      if: steps.deps_cache.outputs.cache-hit != 'true'
+      run: rbuild prepare -d cget -s gh
+    - name: Restore cache files for ccache
+      uses: actions/cache/restore@v3
+      id: ccache_restore
       with:
-        path: ccache
-        key: ${{ matrix.os }}-${{ matrix.configuration }}-ccache-${{ steps.cache_timestamp.outputs.timestamp }}
-        restore-keys: |
-          ${{ matrix.os }}-${{ matrix.configuration }}-ccache-${{ steps.cache_timestamp.outputs.timestamp }}
-          ${{ matrix.os }}-${{ matrix.configuration }}-ccache-
+        path: ${{ github.workspace }}/ccache
+        key: ${{ matrix.os }}-${{ matrix.configuration }}-ccache-${{ github.ref }}
+        restore-keys: ${{ matrix.os }}-${{ matrix.configuration }}-ccache-
     - name: Build and test
       env:
@@ -266,6 +303,23 @@ jobs:
           -DCMAKE_SHARED_LINKER_FLAGS='-fuse-ld=lld'
         ${{ github.workspace }}/cget/bin/ccache -s
+    # GH actions can not update existing cache, as a workaround clear cache and then save it
+    - name: Clear ccache cache before saving
+      if: ${{ steps.ccache_restore.outputs.cache-hit }}
+      shell: bash
+      env:
+        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+      run: |
+        gh extension install actions/gh-actions-cache --pin v1.0.1
+        gh actions-cache delete ${{ steps.ccache_restore.outputs.cache-matched-key }} --confirm
+    - name: Save cache files for ccache
+      uses: actions/cache/save@v3
+      if: always()
+      with:
+        path: ${{ github.workspace }}/ccache
+        key: ${{ matrix.os }}-${{ matrix.configuration }}-ccache-${{ github.ref }}
     - name: Upload code coverage
       if: "matrix.configuration == 'codecov'"
       env:
@@ -303,12 +357,14 @@ jobs:
     steps:
     - name: Free space
-      run: sudo rm -rf /usr/local/android /usr/share/dotnet /usr/local/share/boost /opt/ghc /usr/local/share/chrom* /usr/share/swift /usr/local/julia* /usr/local/lib/android /usr/local/graalvm /usr/local/aws* /usr/local/lib/heroku
+      run: |
+        sudo rm -rf /usr/local/android /usr/share/dotnet /usr/local/share/boost /opt/ghc /usr/local/share/chrom* /usr/share/swift /usr/local/julia* /usr/local/lib/android /usr/local/graalvm /usr/local/aws* /usr/local/lib/heroku /opt/az /opt/hostedtoolcache/Ruby /opt/hostedtoolcache/node /opt/hostedtoolcache/go /opt/hostedtoolcache/Python/2* /opt/hostedtoolcache/CodeQL /opt/microsoft /etc/skel /home/runner/.rustup/toolchains
     - uses: actions/checkout@v3
     - name: Set up Python
       uses: actions/setup-python@v4
       with:
         python-version: 3.7
     - name: Cache dependencies
       # Ignore the failure of a step and avoid terminating the job.
       continue-on-error: true
@@ -317,9 +373,8 @@ jobs:
         # This path is specific to Ubuntu
         path: ${{ github.workspace }}/cget
         # Look to see if there is a cache hit for the corresponding requirements file
-        key:
-          ${{ matrix.os }}-cget-4-${{ hashFiles('requirements.txt', 'dev-requirements.txt') }}
-          ${{ matrix.os }}-cget-4-
+        key: ${{ matrix.os }}-cget-4-${{ hashFiles('requirements.txt', 'dev-requirements.txt') }}
+        restore-keys: ${{ matrix.os }}-cget-4-
     - name: Install dependencies
@@ -328,22 +383,15 @@ jobs:
         pip install https://github.com/RadeonOpenCompute/rbuild/archive/master.tar.gz
         rbuild prepare -d cget -s gh
         sudo apt-get install -y lld
-    - name: Prepare timestamp
-      id: cache_timestamp
-      shell: bash
-      run: echo timestamp="$(date +'%Y-%m-%dT%H:%M:%S')" >> $GITHUB_OUTPUT
-    - name: Cache files for ccache
-      # Ignore the failure of a step and avoid terminating the job.
-      continue-on-error: true
-      uses: pat-s/always-upload-cache@v2.1.3
+    - name: Restore cache files for ccache
+      id: ccache_restore_fpga
+      uses: actions/cache/restore@v3
       with:
-        path: ccache
-        key: ${{ matrix.os }}-${{ matrix.configuration }}-ccache-${{ steps.cache_timestamp.outputs.timestamp }}
-        restore-keys: |
-          ${{ matrix.os }}-${{ matrix.configuration }}-ccache-${{ steps.cache_timestamp.outputs.timestamp }}
-          ${{ matrix.os }}-${{ matrix.configuration }}-ccache-
+        path: ${{ github.workspace }}/ccache
+        key: ${{ matrix.os }}-${{ matrix.configuration }}-ccache-${{ github.ref }}
+        restore-keys: ${{ matrix.os }}-${{ matrix.configuration }}-ccache-
     - name: Build and test
       env:
         CMAKE_PREFIX_PATH: ${{ github.workspace }}/cget
@@ -363,17 +411,36 @@
           -DMIGRAPHX_ENABLE_FPGA=On
         ${{ github.workspace }}/cget/bin/ccache -s
-    #- name: Upload code coverage
-    #  if: "matrix.configuration == 'codecov'"
-    #  env:
-    #    CODECOV_TOKEN: "8545af1c-f90b-4345-92a5-0d075503ca56"
-    #  run: |
-    #    sudo apt-get install -y lcov
-    #    cd build
-    #    lcov --directory . --capture --output-file $(pwd)/coverage.info
-    #    lcov --remove $(pwd)/coverage.info '/usr/*' --output-file $(pwd)/coverage.info
-    #    lcov --list $(pwd)/coverage.info
-    #    curl -Os https://uploader.codecov.io/latest/linux/codecov
-    #    chmod +x codecov
-    #    ./codecov -t ${CODECOV_TOKEN}
-    #    echo "Uploaded"
+    # this is a workaround, with GH actions can not update existing cache
+    - name: Clear ccache cache before saving
+      if: ${{ steps.ccache_restore_fpga.outputs.cache-hit }}
+      shell: bash
+      env:
+        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+      run: |
+        gh extension install actions/gh-actions-cache
+        gh actions-cache delete ${{ steps.ccache_restore_fpga.outputs.cache-matched-key }} --confirm
+      continue-on-error: true
+    - name: Save cache files for ccache
+      uses: actions/cache/save@v3
+      if: always()
+      with:
+        path: ${{ github.workspace }}/ccache
+        key: ${{ matrix.os }}-${{ matrix.configuration }}-ccache-${{ github.ref }}
+    #- name: Upload code coverage
+    #  if: "matrix.configuration == 'codecov'"
+    #  env:
+    #    CODECOV_TOKEN: "8545af1c-f90b-4345-92a5-0d075503ca56"
+    #  run: |
+    #    sudo apt-get install -y lcov
+    #    cd build
+    #    lcov --directory . --capture --output-file $(pwd)/coverage.info
+    #    lcov --remove $(pwd)/coverage.info '/usr/*' --output-file $(pwd)/coverage.info
+    #    lcov --list $(pwd)/coverage.info
+    #    curl -Os https://uploader.codecov.io/latest/linux/codecov
+    #    chmod +x codecov
+    #    ./codecov -t ${CODECOV_TOKEN}
+    #    echo "Uploaded"
name: Cleanup caches of closed PR
on:
pull_request:
types:
- closed
jobs:
cleanup:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v3
- name: Cleanup
run: |
gh extension install actions/gh-actions-cache --pin v1.0.1
REPO=${{ github.repository }}
BRANCH="refs/pull/${{ github.event.pull_request.number }}/merge"
echo "Fetching list of cache key"
cacheKeysForPR=$(gh actions-cache list -R $REPO -B $BRANCH | cut -f 1 | tail -n +3)
## Setting this to not fail the workflow while deleting cache keys.
set +e
echo "Deleting caches..."
for cacheKey in $cacheKeysForPR
do
gh actions-cache delete $cacheKey -R $REPO -B $BRANCH --confirm
done
echo "Done"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -33,6 +33,10 @@ on:
       description: If model in performance test script passes this threshold, it will be skipped
       required: true
       default: '30m'
+    performance_backup_repo:
+      description: Repository for backup
+      required: true
+      default: migraphx-benchmark/performance-backup
     flags:
       description: -m for Max value; -s for Std dev; -r for Threshold file
       required: true
@@ -48,6 +52,7 @@ jobs:
       result_number: ${{ github.event.inputs.result_number || '10' }}
       flags: ${{ github.event.inputs.flags || '-r' }}
       performance_reports_repo: ${{ github.event.inputs.performance_reports_repo || 'ROCmSoftwarePlatform/migraphx-reports' }}
+      performance_backup_repo: ${{ github.event.inputs.performance_backup_repo || 'migraphx-benchmark/performance-backup' }}
       benchmark_utils_repo: ${{ github.event.inputs.benchmark_utils_repo || 'ROCmSoftwarePlatform/migraphx-benchmark-utils' }}
       organization: ${{ github.event.inputs.organization || 'AMD' }}
       model_timeout: ${{ github.event.inputs.model_timeout || '30m' }}
......
@@ -10,12 +10,38 @@ on:
       description: Repository for benchmark utils
       required: true
       default: 'ROCmSoftwarePlatform/migraphx-benchmark-utils'
+    base_image:
+      description: Base image for rocm Docker build
+      required: true
+      default: "rocm/dev-ubuntu-20.04"
+    docker_image:
+      description: Docker image name for rocm Docker build
+      required: true
+      default: "rocm-migraphx"
+    build_navi:
+      description: Build navi number
+      required: true
+      default: "0"
+    organization:
+      type: string
+      description: Organization based on which location of files will be different
+      required: true
+      default: "AMD"
+    overwrite:
+      type: boolean
+      description: Overwrite image if it already exists
+      required: true
 jobs:
   release:
     uses: ROCmSoftwarePlatform/migraphx-benchmark/.github/workflows/rocm-release.yml@main
     with:
-      rocm_release: ${{ github.event.inputs.rocm_release }}
+      rocm_release: ${{ github.event.inputs.rocm_release || '5.1' }}
       benchmark-utils_repo: ${{ github.event.inputs.benchmark-utils_repo || 'ROCmSoftwarePlatform/migraphx-benchmark-utils' }}
+      organization: ${{ github.event.inputs.organization || 'AMD' }}
+      base_image: ${{ github.event.inputs.base_image || 'rocm/dev-ubuntu-20.04' }}
+      docker_image: ${{ github.event.inputs.docker_image || 'rocm-migraphx' }}
+      build_navi: ${{ github.event.inputs.build_navi || '0' }}
+      overwrite: ${{ github.event.inputs.overwrite == 'true' }}
     secrets:
       gh_token: ${{ secrets.MIGRAPHX_BOT_TOKEN }}
 name: Onnxruntime main weekly sync
 on:
   schedule:
     - cron: '07 17 * * 5'
......
@@ -69,7 +69,7 @@ include(ROCMSetupVersion)
 option(BUILD_DEV "Build for development purpose only" OFF)
-rocm_setup_version(VERSION 2.6.0)
+rocm_setup_version(VERSION 2.7.0)
 set(MIGRAPHX_SO_VERSION ${PROJECT_VERSION_MAJOR}.${PROJECT_VERSION_MINOR}.${PROJECT_VERSION_PATCH})
 option( BUILD_SHARED_LIBS "Build as a shared library" ON )
......
@@ -110,7 +110,7 @@ RUN git clone --single-branch --branch ${ONNXRUNTIME_BRANCH} --recursive ${ONNXR
 ADD tools/build_and_test_onnxrt.sh /onnxruntime/build_and_test_onnxrt.sh
-RUN cget -p /usr/local install ROCmSoftwarePlatform/rocMLIR@acb727b348086b58a7f261b32c0e4f0686a4c0ee -DBUILD_MIXR_TARGET=On -DLLVM_ENABLE_ZSTD=Off -DLLVM_ENABLE_THREADS=Off
+RUN cget -p /usr/local install ROCmSoftwarePlatform/rocMLIR@a997d5f51314b45d7a4c04f1599966dcf53f9b4d -DBUILD_MIXR_TARGET=On -DLLVM_ENABLE_ZSTD=Off -DLLVM_ENABLE_THREADS=Off
 ENV MIOPEN_FIND_DB_PATH=/tmp/miopen/find-db
 ENV MIOPEN_USER_DB_PATH=/tmp/miopen/user-db
......
@@ -56,7 +56,7 @@ build MIGraphX. The specific steps are as follows:
 1) Install rocm-cmake, pip3, rocblas, and miopen-hip with the command
 ```
-sudo apt update && sudo apt install -y rocm-cmake python3-pip rocblas miopen-hip
+sudo apt install -y rocm-cmake python3-pip rocblas miopen-hip
 ```
 2) Install [rbuild](https://github.com/RadeonOpenCompute/rbuild) (sudo may be required here.)
@@ -68,14 +68,11 @@ pip3 install https://github.com/RadeonOpenCompute/rbuild/archive/master.tar.gz
 3) Build MIGraphX source code
 ```
-rbuild build -d depend -B build --cxx=/opt/rocm/llvm/bin/clang++
+rbuild build -d depend -B build
 ```
 then all the prerequisites are in the folder `depend`, and MIGraphX is built in the `build` directory.
-Note that for ROCm3.7 and later releases, Ubuntu 18.04 or later releases are needed.
-Upgrade to Ubuntu 18.04 is available at [Upgrade Ubuntu to 18.04](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/wiki/Upgrade-to-Ubuntu-18.04-for-ROCM3.7-or-later-releases)
 Also note that you may meet the error of `rbuild: command not found`. It is because rbuild is installed
 at `$HOME/.local/bin`, which is not in `PATH`. You can either export PATH as `export PATH=$HOME/.local/bin:$PATH`
 to add the folder to `PATH` or add the option `--prefix /usr/local` in the pip3 command when installing rbuild.
@@ -89,7 +86,7 @@ If using this approach, we need to install the prerequisites, configure the cmak
 For convenience, the prerequisites can be built automatically with rbuild as:
 ```
-rbuild build -d depend --cxx=/opt/rocm/llvm/bin/clang++
+rbuild prepare -d depend
 ```
 then all the prerequisites are in the folder `depend`, and they can be used in the `cmake` configuration
@@ -174,7 +171,6 @@ To install:
 dpkg -i <path_to_deb_file>
 ```
 ### Calling MIGraphX APIs
 To use MIGraphX's C/C++ API in your cmake project, we need to set `CMAKE_PREFIX_PATH` to the MIGraphX
 installation location and then do
@@ -184,8 +180,24 @@ target_link_libraries(myApp migraphx::c)
 ```
 Where `myApp` is the cmake target in your project.
+
+## Building for development
+
+Using rbuild, the dependencies for development can be installed with:
+```
+rbuild develop
+```
+This will install the dependencies for development into the `deps` directory and
+configure `cmake` to use those dependencies in the `build` directory. These
+directories can be changed by passing the `--deps-dir` and `--build-dir` flags
+to the `rbuild` command:
+```
+rbuild develop --build-dir build_rocm_55 --deps-dir /home/user/deps_dir
+```
+
-### Building the documentation
+## Building the documentation
 HTML and PDF documentation can be built using:
......
@@ -32,10 +32,6 @@ Disable fast math optimization
 Perform an exhaustive search to find the fastest version of generated kernels for selected backend
-.. options:: --split-single-dyn-dim
-
-Enable the split single dynamic dimension pass
 .. option:: --fp16
 Quantize for fp16
......
@@ -6,9 +6,9 @@ Python Reference
 shape
 -----
-.. py:class:: shape(type, lens, strides=None)
+.. py:class:: shape(type, lens, strides=None, dyn_dims)
-   Describes the shape of a tensor. This includes size, layout, and data type/
+   Describes the shape of a tensor. This includes size, layout, and data type. Can be a dynamic shape by using dyn_dims.
 .. py:method:: type()
@@ -34,6 +34,12 @@ shape
    :rtype: int
+.. py:method:: dyn_dims()
+
+   The dynamic dimensions of the shape.
+
+   :rtype: list[dynamic_dimension]
 .. py:method:: bytes()
    The number of bytes the shape uses.
@@ -46,6 +52,12 @@ shape
    :rtype: int
+.. py:method:: ndim()
+
+   The number of dimensions for the shape.
+
+   :rtype: int
 .. py:method:: packed()
    Returns true if the shape is packed.
@@ -64,6 +76,12 @@ shape
    :rtype: bool
+.. py:method:: dynamic()
+
+   Returns true if the shape is dynamic.
+
+   :rtype: bool
 .. py:method:: standard()
    Returns true if the shape is a standard shape. That is, the shape is both packed and not transposed.
@@ -76,6 +94,18 @@ shape
    :rtype: bool
+dynamic_dimension
+-----------------
+
+.. py:class:: dynamic_dimension(min, max, optimals)
+
+   Construct a dynamic_dimension from a minimum, a maximum, and optionally a set of optimals.
+
+.. py:method:: is_fixed()
+
+   Returns true if the dynamic_dimension is fixed.
+
+   :rtype: bool
 argument
 --------
@@ -292,8 +322,10 @@ parse_onnx
 Load and parse an onnx file.
 :param str filename: Path to file.
-:param str default_dim_value: default batch size to use (if not specified in onnx file).
+:param str default_dim_value: default dimension to use (if not specified in onnx file).
+:param dynamic_dimension default_dyn_dim_value: default dynamic_dimension value to use.
 :param str map_input_dims: Explicitly specify the dims of an input.
+:param list[dynamic_dimension] map_dyn_input_dims: Explicitly specify the dynamic_dimensions of an input.
 :param str skip_unknown_operators: Continue parsing onnx file if an unknown operator is found.
 :param str print_program_on_error: Print program if an error occurs.
 :param int max_loop_iterations: Maximum iteration number for the loop operator.
......
#####################################################################################
# The MIT License (MIT)
#
# Copyright (c) 2015-2023 Advanced Micro Devices, Inc. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#####################################################################################
cmake_minimum_required(VERSION 3.5)
project (cpp_dynamic_batch)
set (CMAKE_CXX_STANDARD 14)
set (EXAMPLE dynamic_batch)
list (APPEND CMAKE_PREFIX_PATH /opt/rocm)
find_package (migraphx)
message("source file: " ${EXAMPLE}.cpp " ---> bin: " ${EXAMPLE})
add_executable(${EXAMPLE} ${EXAMPLE}.cpp)
target_link_libraries(${EXAMPLE} migraphx::c)
# Running ONNX model with dynamic batch
## Description
This example demonstrates how to run a graph program with a dynamic batch size using the MIGraphX C++ API.
## Creating dynamic dimension objects
`dynamic_dimension` objects are used in MIGraphX to specify a range of dimension values, from a minimum value to a maximum value, together with optional optimal values that the tensor can take at model evaluation time.
A dynamic shape is defined by a list of `dynamic_dimensions` while a static shape only has fixed dimension values.
For example, a `dynamic_dimension` with `{min:1, max:10, optimals:{1, 4, 10}}` means that the dimension can be any value from 1 through 10 with the optimal values being 1, 4, and 10.
Supplied optimal values may allow MIGraphX to optimize the program for those specific shapes.
A fixed `dynamic_dimension` can be specified by setting the `min` and `max` to the same value (ex. `{min:3, max:3}`).
A dynamic shape specified solely by fixed `dynamic_dimension` objects will be converted to a static shape during parsing.
This can be useful for setting a static shape using the `set_dyn_input_parameter_shape()` method discussed later in this document.
## Parsing
[ONNX](https://onnx.ai/get-started.html) graphs can be parsed by MIGraphX to create a runnable program with dynamic batch sizes.
The dynamic batch range must be specified by a `dynamic_dimension` object.
One method to set the `dynamic_dimension` object works for ONNX files that only have symbolic variables for the batch dimensions:
```
migraphx::program p;
migraphx::onnx_options options;
options.set_default_dyn_dim_value(migraphx::dynamic_dimension{1, 4, {2, 4}});
p = parse_onnx(input_file, options);
```
Another option, which works for any ONNX model with dynamic batch sizes, uses the dynamic input map, where the entire shape of the input parameter is supplied:
```
migraphx::program p;
migraphx::onnx_options options;
migraphx::dynamic_dimensions dyn_dims = {migraphx::dynamic_dimension{1, 4, {2, 4}},
                                         migraphx::dynamic_dimension{3, 3},
                                         migraphx::dynamic_dimension{4, 4},
                                         migraphx::dynamic_dimension{4, 4}};
options.set_dyn_input_parameter_shape("input", dyn_dims);
p = parse_onnx(input_file, options);
```
## Compiling
Currently the MIGraphX C/C++ API requires that `offload_copy` be enabled for compiling dynamic batch programs.
Here is a snippet of compiling a model with `offload_copy` enabled:
```
migraphx::compile_options c_options;
c_options.set_offload_copy();
p.compile(migraphx::target("gpu"), c_options);
```
where `p` is the `migraphx::program`.
## Saving and Loading
A dynamic batch MIGraphX program can be saved to and loaded from an MXR file the same way as a fully static shape program.
## Executing the dynamic batch model
The compiled dynamic batch model can be executed the same way as a static model by supplying the input data as `arguments` in a `program_parameters` object.
## Running the Example
Your ROCm installation may be in a location other than the one specified in the CMakeLists.txt.
You can set `LD_LIBRARY_PATH` or `CMAKE_PREFIX_PATH` to that location so that this program can still build.
The provided example is [`dynamic_batch.cpp`](./dynamic_batch.cpp)
To compile and run the example from this directory:
```
$ mkdir build
$ cd build
$ cmake ..
$ make
```
There will now be an executable named `dynamic_batch` with the following usage:
```
$ ./dynamic_batch
```
/*
* The MIT License (MIT)
*
* Copyright (c) 2015-2023 Advanced Micro Devices, Inc. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <iostream>
#include <fstream>
#include <vector>
#include <string>
#include <algorithm>
// MIGraphX C++ API
#include <migraphx/migraphx.hpp>
int main(int argc, char** argv)
{
migraphx::onnx_options o_options;
    migraphx::dynamic_dimensions dyn_dims = {migraphx::dynamic_dimension{1, 4, {2, 4}},
                                             migraphx::dynamic_dimension{3, 3},
                                             migraphx::dynamic_dimension{4, 4},
                                             migraphx::dynamic_dimension{5, 5}};
o_options.set_dyn_input_parameter_shape("0", dyn_dims);
auto p = migraphx::parse_onnx("../add_scalar_test.onnx", o_options);
migraphx::compile_options c_options;
c_options.set_offload_copy();
p.compile(migraphx::target("gpu"), c_options);
// batch size = 2
std::vector<uint8_t> a(2 * 3 * 4 * 5, 3);
std::vector<uint8_t> b = {2};
migraphx::program_parameters pp;
migraphx::shape s = migraphx::shape(migraphx_shape_uint8_type, {2, 3, 4, 5});
pp.add("0", migraphx::argument(s, a.data()));
pp.add("1", migraphx::argument(migraphx::shape(migraphx_shape_uint8_type, {1}, {0}), b.data()));
auto outputs = p.eval(pp);
auto result = outputs[0];
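    // Each input element is 3 and the scalar is 2, so every output element should be 5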
std::vector<uint8_t> c(2 * 3 * 4 * 5, 5);
if(bool{result == migraphx::argument(s, c.data())})
{
std::cout << "Successfully executed dynamic batch add\n";
}
else
{
std::cout << "Failed dynamic batch add\n";
}
return 0;
}
@@ -53,7 +53,6 @@ See below for a comprehensive list of commands and option arguments, as well as
 | --enable-offload-copy | Enable implicit offload copying |
 | --disable-fast-math | Disable fast math optimization |
 | --exhaustive-tune | Enable exhaustive search to find fastest kernel |
-| --split-single-dyn-dim | Enable split_single_dyn_dim compiler pass |
 | --fp16 | Quantize for fp16 |
 | --int8 | Quantize for int8 |
 | --tolerance | Tolerance for errors |
......
@@ -21,6 +21,6 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 # THE SOFTWARE.
 #####################################################################################
-tensorflow==2.9.3
+tensorflow==2.11.1
 onnxruntime
 tokenizers
\ No newline at end of file
@@ -24,6 +24,7 @@
 #include <migraphx/execution_environment.hpp>
 #include <migraphx/migraphx.h>
 #include <migraphx/rank.hpp>
+#include <migraphx/ranges.hpp>
 #include <migraphx/shape.hpp>
 #include <migraphx/program.hpp>
 #include <migraphx/onnx.hpp>
@@ -145,6 +146,11 @@ void set_default_dim_value(onnx_options& options, size_t value)
     options.default_dim_value = value;
 }
+void set_default_dyn_dim_value(onnx_options& options, const shape::dynamic_dimension& dd)
+{
+    options.default_dyn_dim_value = dd;
+}
 void set_default_loop_iterations(onnx_options& options, int64_t value)
 {
     options.max_loop_iterations = value;
@@ -161,6 +167,13 @@ void set_input_parameter_shape(onnx_options& options,
     options.map_input_dims[std::string(name)] = std::move(dims);
 }
+void set_dyn_input_parameter_shape(onnx_options& options,
+                                   const char* name,
+                                   std::vector<shape::dynamic_dimension> dyn_dims)
+{
+    options.map_dyn_input_dims[std::string(name)] = std::move(dyn_dims);
+}
 void set_input_parameter_shape(tf_options& options, const char* name, std::vector<std::size_t> dims)
 {
     options.map_input_dims[std::string(name)] = std::move(dims);
@@ -187,6 +200,12 @@ std::vector<const char*> get_names(const std::unordered_map<std::string, Value>&
     return result;
 }
+template <class T>
+std::set<T> make_set(const T* x, std::size_t n)
+{
+    return {x, x + n};
+}
 void quantize_fp16_with_op_names(program& prog, std::vector<std::string>& names)
 {
     if(names.empty())
@@ -346,7 +365,10 @@ const Target* object_cast(const U* x)
 template <class T, class... Ts, class Target = std::remove_pointer_t<T>>
 Target* allocate(Ts&&... xs)
 {
-    return new Target(std::forward<Ts>(xs)...); // NOLINT
+    if constexpr(std::is_aggregate<Target>{})
+        return new Target{std::forward<Ts>(xs)...}; // NOLINT
+    else
+        return new Target(std::forward<Ts>(xs)...); // NOLINT
 }
 template <class T>
@@ -409,6 +431,39 @@ struct manage_generic_ptr
     D deleter = nullptr;
 };
+extern "C" struct migraphx_optimals;
+struct migraphx_optimals
+{
+    template <class... Ts>
+    migraphx_optimals(Ts&&... xs)
+        : object(std::forward<Ts>(xs)...) // NOLINT(readability-redundant-member-init)
+    {
+    }
+    std::set<size_t> object;
+};
+
+extern "C" struct migraphx_dynamic_dimension;
+struct migraphx_dynamic_dimension
+{
+    template <class... Ts>
+    migraphx_dynamic_dimension(Ts&&... xs)
+        : object(std::forward<Ts>(xs)...) // NOLINT(readability-redundant-member-init)
+    {
+    }
+    migraphx::shape::dynamic_dimension object;
+};
+
+extern "C" struct migraphx_dynamic_dimensions;
+struct migraphx_dynamic_dimensions
+{
+    template <class... Ts>
+    migraphx_dynamic_dimensions(Ts&&... xs)
+        : object(std::forward<Ts>(xs)...) // NOLINT(readability-redundant-member-init)
+    {
+    }
+    std::vector<migraphx::shape::dynamic_dimension> object;
+};
 extern "C" struct migraphx_shape;
 struct migraphx_shape
 {
@@ -736,6 +791,152 @@ struct migraphx_experimental_custom_op
     }
 };
extern "C" migraphx_status migraphx_optimals_destroy(migraphx_optimals_t optimals)
{
auto api_error_result = migraphx::try_([&] { destroy((optimals)); });
return api_error_result;
}
extern "C" migraphx_status migraphx_optimals_assign_to(migraphx_optimals_t output,
const_migraphx_optimals_t input)
{
auto api_error_result = migraphx::try_([&] { *output = *input; });
return api_error_result;
}
extern "C" migraphx_status
migraphx_optimals_create(migraphx_optimals_t* optimals, const size_t* ptr, size_t size)
{
auto api_error_result = migraphx::try_([&] {
*optimals = object_cast<migraphx_optimals_t>(
allocate<std::set<size_t>>(migraphx::make_set<size_t>((ptr), (size))));
});
return api_error_result;
}
extern "C" migraphx_status
migraphx_dynamic_dimension_destroy(migraphx_dynamic_dimension_t dynamic_dimension)
{
auto api_error_result = migraphx::try_([&] { destroy((dynamic_dimension)); });
return api_error_result;
}
extern "C" migraphx_status
migraphx_dynamic_dimension_assign_to(migraphx_dynamic_dimension_t output,
const_migraphx_dynamic_dimension_t input)
{
auto api_error_result = migraphx::try_([&] { *output = *input; });
return api_error_result;
}
extern "C" migraphx_status migraphx_dynamic_dimension_create_min_max(
migraphx_dynamic_dimension_t* dynamic_dimension, size_t min, size_t max)
{
auto api_error_result = migraphx::try_([&] {
*dynamic_dimension = object_cast<migraphx_dynamic_dimension_t>(
allocate<migraphx::shape::dynamic_dimension>((min), (max)));
});
return api_error_result;
}
extern "C" migraphx_status
migraphx_dynamic_dimension_create_min_max_optimals(migraphx_dynamic_dimension_t* dynamic_dimension,
size_t min,
size_t max,
migraphx_optimals_t optimals)
{
auto api_error_result = migraphx::try_([&] {
if(optimals == nullptr)
MIGRAPHX_THROW(migraphx_status_bad_param, "Bad parameter optimals: Null pointer");
*dynamic_dimension = object_cast<migraphx_dynamic_dimension_t>(
allocate<migraphx::shape::dynamic_dimension>((min), (max), (optimals->object)));
});
return api_error_result;
}
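// Illustrative caller-side sketch (hypothetical helper, not generated API):
// build a dynamic_dimension of {min: 1, max: 4, optimals: {2, 4}}. The
// dimension copies the optimal set, so the temporary handle can be destroyed
// as soon as the create call returns.
migraphx_status example_make_dyn_dim(migraphx_dynamic_dimension_t* out)
{
    const size_t opt_values[] = {2, 4};
    migraphx_optimals_t opts;
    migraphx_status err = migraphx_optimals_create(&opts, opt_values, 2);
    if(err != migraphx_status_success)
        return err;
    err = migraphx_dynamic_dimension_create_min_max_optimals(out, 1, 4, opts);
    migraphx_optimals_destroy(opts);
    return err;
}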
extern "C" migraphx_status
migraphx_dynamic_dimension_is_fixed(bool* out, const_migraphx_dynamic_dimension_t dynamic_dimension)
{
auto api_error_result = migraphx::try_([&] {
if(dynamic_dimension == nullptr)
MIGRAPHX_THROW(migraphx_status_bad_param,
"Bad parameter dynamic_dimension: Null pointer");
*out = (dynamic_dimension->object).is_fixed();
});
return api_error_result;
}
extern "C" migraphx_status
migraphx_dynamic_dimension_equal(bool* out,
const_migraphx_dynamic_dimension_t dynamic_dimension,
const_migraphx_dynamic_dimension_t x)
{
auto api_error_result = migraphx::try_([&] {
if(dynamic_dimension == nullptr)
MIGRAPHX_THROW(migraphx_status_bad_param,
"Bad parameter dynamic_dimension: Null pointer");
if(x == nullptr)
MIGRAPHX_THROW(migraphx_status_bad_param, "Bad parameter x: Null pointer");
*out = migraphx::equal((dynamic_dimension->object), (x->object));
});
return api_error_result;
}
extern "C" migraphx_status
migraphx_dynamic_dimensions_destroy(migraphx_dynamic_dimensions_t dynamic_dimensions)
{
auto api_error_result = migraphx::try_([&] { destroy((dynamic_dimensions)); });
return api_error_result;
}
extern "C" migraphx_status
migraphx_dynamic_dimensions_assign_to(migraphx_dynamic_dimensions_t output,
const_migraphx_dynamic_dimensions_t input)
{
auto api_error_result = migraphx::try_([&] { *output = *input; });
return api_error_result;
}
extern "C" migraphx_status
migraphx_dynamic_dimensions_create(migraphx_dynamic_dimensions_t* dynamic_dimensions,
const_migraphx_dynamic_dimension_t* ptr,
size_t size)
{
auto api_error_result = migraphx::try_([&] {
*dynamic_dimensions = object_cast<migraphx_dynamic_dimensions_t>(
allocate<std::vector<migraphx::shape::dynamic_dimension>>(
migraphx::to_obj_vector<const_migraphx_dynamic_dimension_t>((ptr), (size))));
});
return api_error_result;
}
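// Illustrative caller-side sketch (hypothetical helper): aggregate existing
// dynamic_dimension handles into one container; the underlying objects are
// copied, so the input handles keep their own independent lifetimes.
migraphx_status example_make_dyn_dims(migraphx_dynamic_dimensions_t* out,
                                      const_migraphx_dynamic_dimension_t batch,
                                      const_migraphx_dynamic_dimension_t channels)
{
    const_migraphx_dynamic_dimension_t handles[] = {batch, channels};
    return migraphx_dynamic_dimensions_create(out, handles, 2);
}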
extern "C" migraphx_status
migraphx_dynamic_dimensions_size(size_t* out, migraphx_dynamic_dimensions_t dynamic_dimensions)
{
auto api_error_result = migraphx::try_([&] {
if(dynamic_dimensions == nullptr)
MIGRAPHX_THROW(migraphx_status_bad_param,
"Bad parameter dynamic_dimensions: Null pointer");
*out = (dynamic_dimensions->object).size();
});
return api_error_result;
}
extern "C" migraphx_status
migraphx_dynamic_dimensions_get(const_migraphx_dynamic_dimension_t* out,
migraphx_dynamic_dimensions_t dynamic_dimensions,
size_t idx)
{
auto api_error_result = migraphx::try_([&] {
if(dynamic_dimensions == nullptr)
MIGRAPHX_THROW(migraphx_status_bad_param,
"Bad parameter dynamic_dimensions: Null pointer");
*out = object_cast<const_migraphx_dynamic_dimension_t>(
&((dynamic_dimensions->object).at((idx))));
});
return api_error_result;
}
extern "C" migraphx_status migraphx_shape_destroy(migraphx_shape_t shape) extern "C" migraphx_status migraphx_shape_destroy(migraphx_shape_t shape)
{ {
auto api_error_result = migraphx::try_([&] { destroy((shape)); }); auto api_error_result = migraphx::try_([&] { destroy((shape)); });
...@@ -794,6 +995,19 @@ extern "C" migraphx_status migraphx_shape_create_scalar(migraphx_shape_t* shape, ...@@ -794,6 +995,19 @@ extern "C" migraphx_status migraphx_shape_create_scalar(migraphx_shape_t* shape,
return api_error_result; return api_error_result;
} }
extern "C" migraphx_status migraphx_shape_create_dynamic(migraphx_shape_t* shape,
migraphx_shape_datatype_t type,
migraphx_dynamic_dimensions_t dims)
{
auto api_error_result = migraphx::try_([&] {
if(dims == nullptr)
MIGRAPHX_THROW(migraphx_status_bad_param, "Bad parameter dims: Null pointer");
*shape = object_cast<migraphx_shape_t>(
allocate<migraphx::shape>((migraphx::to_shape_type(type)), (dims->object)));
});
return api_error_result;
}
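// Illustrative caller-side sketch (hypothetical helper): a rank-1 dynamic
// float shape built from the functions above; migraphx_shape_float_type is
// assumed to be the float value of the generated migraphx_shape_datatype_t enum.
migraphx_status example_make_dyn_shape(migraphx_shape_t* out)
{
    migraphx_dynamic_dimension_t batch;
    migraphx_status err = migraphx_dynamic_dimension_create_min_max(&batch, 1, 64);
    if(err != migraphx_status_success)
        return err;
    const_migraphx_dynamic_dimension_t handles[] = {batch};
    migraphx_dynamic_dimensions_t dims;
    err = migraphx_dynamic_dimensions_create(&dims, handles, 1);
    if(err == migraphx_status_success)
    {
        err = migraphx_shape_create_dynamic(out, migraphx_shape_float_type, dims);
        migraphx_dynamic_dimensions_destroy(dims);
    }
    migraphx_dynamic_dimension_destroy(batch);
    return err;
}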
extern "C" migraphx_status extern "C" migraphx_status
migraphx_shape_lengths(const size_t** out, size_t* out_size, const_migraphx_shape_t shape) migraphx_shape_lengths(const size_t** out, size_t* out_size, const_migraphx_shape_t shape)
{ {
...@@ -824,6 +1038,17 @@ migraphx_shape_strides(const size_t** out, size_t* out_size, const_migraphx_shap ...@@ -824,6 +1038,17 @@ migraphx_shape_strides(const size_t** out, size_t* out_size, const_migraphx_shap
return api_error_result; return api_error_result;
} }
extern "C" migraphx_status migraphx_shape_dyn_dims(migraphx_dynamic_dimensions_t* out,
const_migraphx_shape_t shape)
{
auto api_error_result = migraphx::try_([&] {
if(shape == nullptr)
MIGRAPHX_THROW(migraphx_status_bad_param, "Bad parameter shape: Null pointer");
*out = allocate<migraphx_dynamic_dimensions_t>((shape->object).dyn_dims());
});
return api_error_result;
}
extern "C" migraphx_status migraphx_shape_type(migraphx_shape_datatype_t* out, extern "C" migraphx_status migraphx_shape_type(migraphx_shape_datatype_t* out,
const_migraphx_shape_t shape) const_migraphx_shape_t shape)
{ {
...@@ -857,6 +1082,16 @@ extern "C" migraphx_status migraphx_shape_bytes(size_t* out, const_migraphx_shap ...@@ -857,6 +1082,16 @@ extern "C" migraphx_status migraphx_shape_bytes(size_t* out, const_migraphx_shap
return api_error_result; return api_error_result;
} }
extern "C" migraphx_status migraphx_shape_ndim(size_t* out, const_migraphx_shape_t shape)
{
auto api_error_result = migraphx::try_([&] {
if(shape == nullptr)
MIGRAPHX_THROW(migraphx_status_bad_param, "Bad parameter shape: Null pointer");
*out = (shape->object).ndim();
});
return api_error_result;
}
extern "C" migraphx_status extern "C" migraphx_status
migraphx_shape_equal(bool* out, const_migraphx_shape_t shape, const_migraphx_shape_t x) migraphx_shape_equal(bool* out, const_migraphx_shape_t shape, const_migraphx_shape_t x)
{ {
...@@ -880,6 +1115,16 @@ extern "C" migraphx_status migraphx_shape_standard(bool* out, const_migraphx_sha ...@@ -880,6 +1115,16 @@ extern "C" migraphx_status migraphx_shape_standard(bool* out, const_migraphx_sha
return api_error_result; return api_error_result;
} }
extern "C" migraphx_status migraphx_shape_dynamic(bool* out, const_migraphx_shape_t shape)
{
auto api_error_result = migraphx::try_([&] {
if(shape == nullptr)
MIGRAPHX_THROW(migraphx_status_bad_param, "Bad parameter shape: Null pointer");
*out = (shape->object).dynamic();
});
return api_error_result;
}
extern "C" migraphx_status migraphx_shape_index(size_t* out, const_migraphx_shape_t shape, size_t i) extern "C" migraphx_status migraphx_shape_index(size_t* out, const_migraphx_shape_t shape, size_t i)
{ {
auto api_error_result = migraphx::try_([&] { auto api_error_result = migraphx::try_([&] {
...@@ -915,6 +1160,17 @@ migraphx_argument_create(migraphx_argument_t* argument, const_migraphx_shape_t s ...@@ -915,6 +1160,17 @@ migraphx_argument_create(migraphx_argument_t* argument, const_migraphx_shape_t s
return api_error_result; return api_error_result;
} }
extern "C" migraphx_status migraphx_argument_create_empty(migraphx_argument_t* argument,
const_migraphx_shape_t shape)
{
auto api_error_result = migraphx::try_([&] {
if(shape == nullptr)
MIGRAPHX_THROW(migraphx_status_bad_param, "Bad parameter shape: Null pointer");
*argument = object_cast<migraphx_argument_t>(allocate<migraphx::argument>((shape->object)));
});
return api_error_result;
}
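// Illustrative caller-side sketch (hypothetical helper): unlike
// migraphx_argument_create, which wraps user-provided memory, this creates an
// argument that owns an uninitialized buffer sized by the shape.
migraphx_status example_make_empty_argument(migraphx_argument_t* out,
                                            const_migraphx_shape_t s)
{
    return migraphx_argument_create_empty(out, s);
}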
extern "C" migraphx_status migraphx_argument_shape(const_migraphx_shape_t* out, extern "C" migraphx_status migraphx_argument_shape(const_migraphx_shape_t* out,
const_migraphx_argument_t argument) const_migraphx_argument_t argument)
{ {
...@@ -1590,6 +1846,19 @@ extern "C" migraphx_status migraphx_onnx_options_set_input_parameter_shape( ...@@ -1590,6 +1846,19 @@ extern "C" migraphx_status migraphx_onnx_options_set_input_parameter_shape(
return api_error_result; return api_error_result;
} }
extern "C" migraphx_status migraphx_onnx_options_set_dyn_input_parameter_shape(
migraphx_onnx_options_t onnx_options, const char* name, migraphx_dynamic_dimensions_t dims)
{
auto api_error_result = migraphx::try_([&] {
if(onnx_options == nullptr)
MIGRAPHX_THROW(migraphx_status_bad_param, "Bad parameter onnx_options: Null pointer");
if(dims == nullptr)
MIGRAPHX_THROW(migraphx_status_bad_param, "Bad parameter dims: Null pointer");
migraphx::set_dyn_input_parameter_shape((onnx_options->object), (name), (dims->object));
});
return api_error_result;
}
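// Illustrative caller-side sketch (hypothetical helper; "input" is an assumed
// parameter name): register a dynamic input shape before parsing. The options
// store their own copy of the dimensions.
migraphx_status example_set_dyn_input(migraphx_onnx_options_t opts,
                                      migraphx_dynamic_dimensions_t dims)
{
    return migraphx_onnx_options_set_dyn_input_parameter_shape(opts, "input", dims);
}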
extern "C" migraphx_status extern "C" migraphx_status
migraphx_onnx_options_set_default_dim_value(migraphx_onnx_options_t onnx_options, size_t value) migraphx_onnx_options_set_default_dim_value(migraphx_onnx_options_t onnx_options, size_t value)
{ {
...@@ -1601,6 +1870,20 @@ migraphx_onnx_options_set_default_dim_value(migraphx_onnx_options_t onnx_options ...@@ -1601,6 +1870,20 @@ migraphx_onnx_options_set_default_dim_value(migraphx_onnx_options_t onnx_options
return api_error_result; return api_error_result;
} }
extern "C" migraphx_status
migraphx_onnx_options_set_default_dyn_dim_value(migraphx_onnx_options_t onnx_options,
const_migraphx_dynamic_dimension_t dd)
{
auto api_error_result = migraphx::try_([&] {
if(onnx_options == nullptr)
MIGRAPHX_THROW(migraphx_status_bad_param, "Bad parameter onnx_options: Null pointer");
if(dd == nullptr)
MIGRAPHX_THROW(migraphx_status_bad_param, "Bad parameter dd: Null pointer");
migraphx::set_default_dyn_dim_value((onnx_options->object), (dd->object));
});
return api_error_result;
}
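// Illustrative caller-side sketch (hypothetical helper): let every unspecified
// ONNX dimension parameter default to the dynamic range {1, 8}. The options
// copy the value, so the handle can be destroyed immediately.
migraphx_status example_set_default_dyn_dim(migraphx_onnx_options_t opts)
{
    migraphx_dynamic_dimension_t dd;
    migraphx_status err = migraphx_dynamic_dimension_create_min_max(&dd, 1, 8);
    if(err != migraphx_status_success)
        return err;
    err = migraphx_onnx_options_set_default_dyn_dim_value(opts, dd);
    migraphx_dynamic_dimension_destroy(dd);
    return err;
}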
extern "C" migraphx_status extern "C" migraphx_status
migraphx_onnx_options_set_default_loop_iterations(migraphx_onnx_options_t onnx_options, migraphx_onnx_options_set_default_loop_iterations(migraphx_onnx_options_t onnx_options,
int64_t value) int64_t value)
......
...@@ -66,6 +66,15 @@ typedef enum ...@@ -66,6 +66,15 @@ typedef enum
} migraphx_shape_datatype_t; } migraphx_shape_datatype_t;
#undef MIGRAPHX_SHAPE_GENERATE_ENUM_TYPES #undef MIGRAPHX_SHAPE_GENERATE_ENUM_TYPES
typedef struct migraphx_optimals* migraphx_optimals_t;
typedef const struct migraphx_optimals* const_migraphx_optimals_t;
typedef struct migraphx_dynamic_dimension* migraphx_dynamic_dimension_t;
typedef const struct migraphx_dynamic_dimension* const_migraphx_dynamic_dimension_t;
typedef struct migraphx_dynamic_dimensions* migraphx_dynamic_dimensions_t;
typedef const struct migraphx_dynamic_dimensions* const_migraphx_dynamic_dimensions_t;
typedef struct migraphx_shape* migraphx_shape_t;
typedef const struct migraphx_shape* const_migraphx_shape_t;
@@ -157,6 +166,55 @@ typedef migraphx_status (*migraphx_experimental_custom_op_copy)(void** out, void
typedef migraphx_status (*migraphx_experimental_custom_op_delete)(void* input);
migraphx_status migraphx_optimals_destroy(migraphx_optimals_t optimals);
migraphx_status migraphx_optimals_assign_to(migraphx_optimals_t output,
const_migraphx_optimals_t input);
migraphx_status
migraphx_optimals_create(migraphx_optimals_t* optimals, const size_t* ptr, size_t size);
migraphx_status migraphx_dynamic_dimension_destroy(migraphx_dynamic_dimension_t dynamic_dimension);
migraphx_status migraphx_dynamic_dimension_assign_to(migraphx_dynamic_dimension_t output,
const_migraphx_dynamic_dimension_t input);
migraphx_status migraphx_dynamic_dimension_create_min_max(
migraphx_dynamic_dimension_t* dynamic_dimension, size_t min, size_t max);
migraphx_status
migraphx_dynamic_dimension_create_min_max_optimals(migraphx_dynamic_dimension_t* dynamic_dimension,
size_t min,
size_t max,
migraphx_optimals_t optimals);
migraphx_status
migraphx_dynamic_dimension_is_fixed(bool* out,
const_migraphx_dynamic_dimension_t dynamic_dimension);
migraphx_status
migraphx_dynamic_dimension_equal(bool* out,
const_migraphx_dynamic_dimension_t dynamic_dimension,
const_migraphx_dynamic_dimension_t x);
migraphx_status
migraphx_dynamic_dimensions_destroy(migraphx_dynamic_dimensions_t dynamic_dimensions);
migraphx_status migraphx_dynamic_dimensions_assign_to(migraphx_dynamic_dimensions_t output,
const_migraphx_dynamic_dimensions_t input);
migraphx_status
migraphx_dynamic_dimensions_create(migraphx_dynamic_dimensions_t* dynamic_dimensions,
const_migraphx_dynamic_dimension_t* ptr,
size_t size);
migraphx_status migraphx_dynamic_dimensions_size(size_t* out,
migraphx_dynamic_dimensions_t dynamic_dimensions);
migraphx_status migraphx_dynamic_dimensions_get(const_migraphx_dynamic_dimension_t* out,
migraphx_dynamic_dimensions_t dynamic_dimensions,
size_t idx);
migraphx_status migraphx_shape_destroy(migraphx_shape_t shape);
migraphx_status migraphx_shape_assign_to(migraphx_shape_t output, const_migraphx_shape_t input);
@@ -176,23 +234,34 @@ migraphx_status migraphx_shape_create_with_strides(migraphx_shape_t* shape,
migraphx_status migraphx_shape_create_scalar(migraphx_shape_t* shape,
migraphx_shape_datatype_t type);
migraphx_status migraphx_shape_create_dynamic(migraphx_shape_t* shape,
migraphx_shape_datatype_t type,
migraphx_dynamic_dimensions_t dims);
migraphx_status
migraphx_shape_lengths(const size_t** out, size_t* out_size, const_migraphx_shape_t shape);
migraphx_status
migraphx_shape_strides(const size_t** out, size_t* out_size, const_migraphx_shape_t shape);
migraphx_status migraphx_shape_dyn_dims(migraphx_dynamic_dimensions_t* out,
const_migraphx_shape_t shape);
migraphx_status migraphx_shape_type(migraphx_shape_datatype_t* out, const_migraphx_shape_t shape);
migraphx_status migraphx_shape_elements(size_t* out, const_migraphx_shape_t shape);
migraphx_status migraphx_shape_bytes(size_t* out, const_migraphx_shape_t shape);
migraphx_status migraphx_shape_ndim(size_t* out, const_migraphx_shape_t shape);
migraphx_status
migraphx_shape_equal(bool* out, const_migraphx_shape_t shape, const_migraphx_shape_t x);
migraphx_status migraphx_shape_standard(bool* out, const_migraphx_shape_t shape);
migraphx_status migraphx_shape_dynamic(bool* out, const_migraphx_shape_t shape);
migraphx_status migraphx_shape_index(size_t* out, const_migraphx_shape_t shape, size_t i);
migraphx_status migraphx_argument_destroy(migraphx_argument_t argument);
@@ -203,6 +272,9 @@ migraphx_status migraphx_argument_assign_to(migraphx_argument_t output,
migraphx_status
migraphx_argument_create(migraphx_argument_t* argument, const_migraphx_shape_t shape, void* buffer);
migraphx_status migraphx_argument_create_empty(migraphx_argument_t* argument,
const_migraphx_shape_t shape);
migraphx_status migraphx_argument_shape(const_migraphx_shape_t* out,
const_migraphx_argument_t argument);
@@ -397,9 +469,16 @@ migraphx_status migraphx_onnx_options_create(migraphx_onnx_options_t* onnx_optio
migraphx_status migraphx_onnx_options_set_input_parameter_shape(
migraphx_onnx_options_t onnx_options, const char* name, size_t* dims, size_t dims_size);
migraphx_status migraphx_onnx_options_set_dyn_input_parameter_shape(
migraphx_onnx_options_t onnx_options, const char* name, migraphx_dynamic_dimensions_t dims);
migraphx_status migraphx_onnx_options_set_default_dim_value(migraphx_onnx_options_t onnx_options,
size_t value);
migraphx_status
migraphx_onnx_options_set_default_dyn_dim_value(migraphx_onnx_options_t onnx_options,
const_migraphx_dynamic_dimension_t dd);
migraphx_status
migraphx_onnx_options_set_default_loop_iterations(migraphx_onnx_options_t onnx_options,
int64_t value);
...
@@ -571,10 +571,90 @@ using require_interface =
// NOLINTNEXTLINE
#define MIGRAPHX_CONST_HANDLE_BASE(name) MIGRAPHX_DETAIL_HANDLE_BASE(name, const)
/**
* Container to hold optimal dynamic dimension values.
*/
struct optimals : MIGRAPHX_HANDLE_BASE(optimals)
{
MIGRAPHX_HANDLE_CONSTRUCTOR(optimals)
optimals(std::initializer_list<size_t> init_list)
{
this->make_handle(&migraphx_optimals_create, init_list.begin(), init_list.size());
}
};
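// Illustrative usage (hypothetical helper, not part of the API): the
// initializer-list constructor forwards the values straight to
// migraphx_optimals_create.
inline optimals make_example_optimals() { return optimals{2, 4, 8}; }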
/**
* @brief Dynamic dimension object.
* @details minimum, maximum, and optimal dimensions
*/
struct dynamic_dimension : MIGRAPHX_CONST_HANDLE_BASE(dynamic_dimension)
{
MIGRAPHX_HANDLE_CONSTRUCTOR(dynamic_dimension)
dynamic_dimension(size_t min, size_t max)
{
this->make_handle(&migraphx_dynamic_dimension_create_min_max, min, max);
}
dynamic_dimension(size_t min, size_t max, const optimals& opts)
{
this->make_handle(
&migraphx_dynamic_dimension_create_min_max_optimals, min, max, opts.get_handle_ptr());
}
bool is_fixed() const
{
bool result = false;
call(&migraphx_dynamic_dimension_is_fixed, &result, this->get_handle_ptr());
return result;
}
friend bool operator==(const dynamic_dimension& x, const dynamic_dimension& y)
{
bool pout;
call(&migraphx_dynamic_dimension_equal, &pout, x.get_handle_ptr(), y.get_handle_ptr());
return pout;
}
friend bool operator!=(const dynamic_dimension& x, const dynamic_dimension& y)
{
return not(x == y);
}
};
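// Illustrative usage (hypothetical helper): a variable batch dimension with
// preferred sizes next to a fixed dimension where min == max.
inline bool example_dynamic_dimension()
{
    dynamic_dimension batch{1, 64, optimals{1, 32, 64}};
    dynamic_dimension channels{3, 3};
    return channels.is_fixed() and batch != channels;
}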
/**
* Container to hold dynamic_dimension objects.
*/
struct dynamic_dimensions : MIGRAPHX_HANDLE_BASE(dynamic_dimensions)
{
MIGRAPHX_HANDLE_CONSTRUCTOR(dynamic_dimensions)
template <class... Ts>
dynamic_dimensions(Ts... xs)
{
std::array<const_migraphx_dynamic_dimension_t, sizeof...(Ts)> a{xs.get_handle_ptr()...};
this->make_handle(&migraphx_dynamic_dimensions_create, a.data(), a.size());
}
size_t size() const
{
size_t pout;
call(&migraphx_dynamic_dimensions_size, &pout, this->get_handle_ptr());
return pout;
}
dynamic_dimension operator[](size_t pidx) const
{
const_migraphx_dynamic_dimension_t pout;
call(&migraphx_dynamic_dimensions_get, &pout, this->get_handle_ptr(), pidx);
return {pout, this->share_handle()};
}
};
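// Illustrative usage (hypothetical helper): collect one dynamic_dimension per
// axis; size() and operator[] read them back through the C API.
inline dynamic_dimensions make_example_dyn_dims()
{
    return dynamic_dimensions{dynamic_dimension{1, 64}, dynamic_dimension{3, 3}};
}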
/**
* @brief Describe shape of tensor
* @details A shape consists of a data type, the lengths of a multi-dimensional tensor, and strides
*
*/
struct shape : MIGRAPHX_CONST_HANDLE_BASE(shape)
{
@@ -598,6 +678,13 @@ struct shape : MIGRAPHX_CONST_HANDLE_BASE(shape)
this->make_handle(&migraphx_shape_create, type, plengths.data(), plengths.size());
}
// Force all calls of the form shape(type_t, { size_t-compatible values }) to resolve to
// shape(type_t, std::vector<std::size_t>)
shape(migraphx_shape_datatype_t t, std::initializer_list<std::size_t> d)
: shape::shape(t, std::vector<std::size_t>{d.begin(), d.end()})
{
}
shape(migraphx_shape_datatype_t type,
std::vector<size_t> plengths,
std::vector<size_t> pstrides)
@@ -610,6 +697,11 @@ struct shape : MIGRAPHX_CONST_HANDLE_BASE(shape)
pstrides.size());
}
shape(migraphx_shape_datatype_t type, const dynamic_dimensions& dyn_dims)
{
this->make_handle(&migraphx_shape_create_dynamic, type, dyn_dims.get_handle_ptr());
}
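// Usage sketch (illustrative):
//     dynamic_dimensions dims{dynamic_dimension{1, 64}, dynamic_dimension{784, 784}};
//     shape s(migraphx_shape_float_type, dims); // rank-2 shape with a dynamic batch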
std::vector<size_t> lengths() const
{
const size_t* pout;
@@ -626,6 +718,14 @@ struct shape : MIGRAPHX_CONST_HANDLE_BASE(shape)
return {pout, pout + pout_size};
}
/// Get the dynamic dimensions of the shape
dynamic_dimensions dyn_dims() const
{
migraphx_dynamic_dimensions_t pout;
call(&migraphx_shape_dyn_dims, &pout, this->get_handle_ptr());
return {pout, own{}};
}
migraphx_shape_datatype_t type() const
{
migraphx_shape_datatype_t pout;
@@ -654,6 +754,14 @@ struct shape : MIGRAPHX_CONST_HANDLE_BASE(shape)
return result;
}
/// Is the shape dynamic
bool dynamic() const
{
bool result = false;
call(&migraphx_shape_dynamic, &result, this->get_handle_ptr());
return result;
}
// map element index to space index
size_t index(size_t i) const
{
@@ -687,6 +795,11 @@ struct argument : MIGRAPHX_CONST_HANDLE_BASE(argument)
MIGRAPHX_DEPRECATED("Constructor without lifetime annotation is deprecated.")
argument(const migraphx_argument* p) { this->set_handle(p, borrow{}); }
argument(shape pshape)
{
this->make_handle(&migraphx_argument_create_empty, pshape.get_handle_ptr());
}
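// Usage sketch (illustrative): allocate an uninitialized buffer sized for a shape:
//     shape s{migraphx_shape_float_type, {1, 3, 224, 224}};
//     argument a(s);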
argument(shape pshape, void* pbuffer)
{
this->make_handle(&migraphx_argument_create, pshape.get_handle_ptr(), pbuffer);
@@ -1182,12 +1295,27 @@ struct onnx_options : MIGRAPHX_HANDLE_BASE(onnx_options)
dim.size());
}
void set_dyn_input_parameter_shape(const std::string& name, const dynamic_dimensions& dyn_dims)
{
call(&migraphx_onnx_options_set_dyn_input_parameter_shape,
this->get_handle_ptr(),
name.c_str(),
dyn_dims.get_handle_ptr());
}
/// When there is a dimension parameter, then use this default value
void set_default_dim_value(unsigned int value)
{
call(&migraphx_onnx_options_set_default_dim_value, this->get_handle_ptr(), value);
}
void set_default_dyn_dim_value(const dynamic_dimension& dd)
{
call(&migraphx_onnx_options_set_default_dyn_dim_value,
this->get_handle_ptr(),
dd.get_handle_ptr());
}
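// Usage sketch (illustrative; "input" is a hypothetical parameter name):
//     onnx_options opts;
//     opts.set_dyn_input_parameter_shape(
//         "input", dynamic_dimensions{dynamic_dimension{1, 64}, dynamic_dimension{3, 3}});
//     opts.set_default_dyn_dim_value(dynamic_dimension{1, 4}); // fallback range {1, 4}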
/// Set default max iteration number for the loop operator
void set_default_loop_iterations(int64_t value)
{
...