- 17 Apr, 2023 17 commits
-
-
Ted Themistokleous authored
Already installed via install_prereqs.sh for libtbb-dev
-
Ted Themistokleous authored
Allows us to continually filter out the top value (as if popped) when performing the copy_if starting just one index after it.
-
Ted Themistokleous authored
Offload these calculations to when the batch box is created; since we're now copying by value, there's no need to recalculate these parameters. This reduces the repeated work for the selected top_box while still leveraging parallelism for each subsequent box compared, since our lambda in copy_if calls batch_box() prior to suppress_by_iou.
-
Ted Themistokleous authored
Make copies here since we're doing this calc in parallel
-
Ted Themistokleous authored
Less code, simpler to read.
-
Ted Themistokleous authored
Needed to support std::execution::par, which is used for parallel computation.
-
Ted Themistokleous authored
This reverts commit aa91c4db7551ad69b6141597483d7c980d40d466.
-
Ted Themistokleous authored
- Add support for TBB in MIGraphX
- Add include for TBB in DockerFile
- Replace inner loop with copy_if and use std::execution::par to filter
- Change heap to vector and sort in parallel in filter_boxes_per_score()

With the help of Paul this cuts down NMS in ref from around 43-44s to about 2s.
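A minimal sketch (not MIGraphX source; names and types are simplified stand-ins) of the pattern the commit describes: filter boxes by score into (score, index) pairs and sort a plain vector by score instead of maintaining a heap. MIGraphX passes std::execution::par to these algorithms (which with libstdc++ requires linking against TBB); sequential calls are used here so the sketch stands alone.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

using scored_box = std::pair<double, std::size_t>; // (score, box index)

// Simplified stand-in for filter_boxes_per_score(): keep boxes above the
// score threshold, then sort descending by score.
std::vector<scored_box> filter_boxes_per_score(const std::vector<double>& scores,
                                               double score_threshold)
{
    std::vector<scored_box> result;
    result.reserve(scores.size());
    std::size_t idx = 0;
    for(double s : scores)
        result.emplace_back(s, idx++);
    // The inner comparison loop the commit replaced with a copy_if-style
    // filter (MIGraphX adds std::execution::par as the first argument).
    result.erase(std::remove_if(result.begin(),
                                result.end(),
                                [&](const scored_box& b) {
                                    return b.first <= score_threshold;
                                }),
                 result.end());
    // Plain vector sort instead of a heap; parallelizable the same way.
    std::sort(result.begin(), result.end(), std::greater<>{});
    return result;
}
```

With the heap replaced by a sorted vector, the whole filter-and-order step becomes two standard algorithms that accept an execution policy.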
-
Ted Themistokleous authored
This cleans up the compute_nms signature and stops using additional memory: we no longer store every pair result twice only for it to be cleared on each shape_for_each() run.
-
Ted Themistokleous authored
Allows us to transform to get the proper input and then spawn a thread to call f() in a threaded fashion. Useful if we have many batches/classes in our runs.
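A hypothetical sketch of the pattern described (threaded_for_each is an illustrative name, not a MIGraphX API): map each batch/class index to its work item and run f() on each in its own thread, joining at the end.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Spawn one thread per work item and wait for all of them. In practice the
// index i would first be transformed into the proper input for f().
void threaded_for_each(std::size_t n, const std::function<void(std::size_t)>& f)
{
    std::vector<std::thread> threads;
    threads.reserve(n);
    for(std::size_t i = 0; i < n; ++i)
        threads.emplace_back([&f, i] { f(i); }); // one thread per batch/class
    for(auto& t : threads)
        t.join();
}
```

One thread per item is only worthwhile when the per-item work dominates thread-creation cost, which is the "many batches/classes" case the commit targets.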
-
Ted Themistokleous authored
This avoids performing N comparisons for the given batch when the score threshold used is less than zero. It allows us to simply std::transform all boxes without performing a bunch of needless compares, and instead constructs a std::pair of (box score, idx) directly.
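A minimal sketch (simplified names, not MIGraphX source) of that shortcut: when the score threshold is below zero every box passes, so the (score, index) pairs can be built with a single std::transform and no per-box comparison.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <numeric>
#include <utility>
#include <vector>

using scored_box = std::pair<double, std::size_t>; // (score, box index)

// With a negative threshold no box can be filtered out, so skip the
// predicate entirely and construct every pair directly.
std::vector<scored_box> all_boxes(const std::vector<double>& scores)
{
    std::vector<std::size_t> indices(scores.size());
    std::iota(indices.begin(), indices.end(), std::size_t{0});
    std::vector<scored_box> result(scores.size());
    std::transform(indices.begin(),
                   indices.end(),
                   result.begin(),
                   [&](std::size_t i) { return scored_box{scores[i], i}; });
    return result;
}
```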
-
Ted Themistokleous authored
Remove the need to use gpu; switch this to ref. Change names to reflect static vs. random data.
-
Ted Themistokleous authored
In this case we have a batch size with no bound on the score threshold, so we end up evaluating a single huge batch on its own. The concern here is that this should run all the way through without completely stalling or intractably running in a single-threaded fashion.
-
Ted Themistokleous authored
This saves two copies of the entire box class per call and instead works on references to these objects, which are created within the loops.
-
Ted Themistokleous authored
We're continually creating/destroying a batch box in the while() check as we run through the boxes_heap by calling batch_box() constantly. Make this next_box and only calculate it before we pop that box from the boxes_heap. This should get rid of the function-call overhead of constant calls in the case of a large batch size.
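A simplified sketch of the refactor (box, drain_above, and the heap contents are illustrative, not the MIGraphX types): the temporary is no longer rebuilt every time the loop condition is evaluated; next_box is constructed once per iteration, just before it is consumed.

```cpp
#include <cassert>
#include <queue>

struct box
{
    double score;
};

// Before: while(!heap.empty() && box{heap.top()}.score > t) constructed and
// destroyed a box on every condition check. After: build next_box once per
// iteration, right before popping it.
double drain_above(std::priority_queue<double>& heap, double threshold)
{
    double total = 0;
    while(!heap.empty())
    {
        box next_box{heap.top()}; // built once per iteration
        if(next_box.score <= threshold)
            break;
        heap.pop();
        total += next_box.score;
    }
    return total;
}
```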
-
Ted Themistokleous authored
Just return early if either box has zero area. Searching for the intersection and union is logically irrelevant in that case.
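A sketch of the early exit (box2d and iou are simplified stand-ins for the MIGraphX box/IoU code): a zero-area box cannot intersect anything, so the IoU is zero by definition and the intersection/union computation can be skipped entirely.

```cpp
#include <algorithm>
#include <cassert>

struct box2d
{
    double x1, y1, x2, y2; // axis-aligned corners
};

double area(const box2d& b) { return (b.x2 - b.x1) * (b.y2 - b.y1); }

double iou(const box2d& a, const box2d& b)
{
    const double area_a = area(a);
    const double area_b = area(b);
    // Early exit: a degenerate box has zero overlap with everything.
    if(area_a == 0.0 || area_b == 0.0)
        return 0.0;
    const double ix = std::max(0.0, std::min(a.x2, b.x2) - std::max(a.x1, b.x1));
    const double iy = std::max(0.0, std::min(a.y2, b.y2) - std::max(a.y1, b.y1));
    const double inter = ix * iy;
    return inter / (area_a + area_b - inter);
}
```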
-
shivadbhavsar authored
Exposes the shape::type_t values to be used by the Python API; this is required by torch_migraphx to support torchbench models.
-
- 13 Apr, 2023 1 commit
-
-
Zhuoran Yin authored
-
- 12 Apr, 2023 3 commits
-
-
Paul Fultz II authored
-
Paul Fultz II authored
This removes the --cxx flags from the rbuild commands since they are not necessary. Also added a section about using rbuild to set up an environment for development.
-
Djordje Petrovic authored
-
- 11 Apr, 2023 3 commits
-
-
github-actions[bot] authored
-
Paul Fultz II authored
-
Ted Themistokleous authored
-
- 10 Apr, 2023 3 commits
-
-
Umang Yadav authored
-
Charlie Lin authored
Adds a matcher to split_single_dyn_dim that finds all broadcast or multibroadcast instructions with two static shape inputs and replaces the instruction with the one-input version. Sorts the get_output_parameters() list to ensure the correct ordering (was getting an error for some models).
-
Paul Fultz II authored
-
- 09 Apr, 2023 1 commit
-
-
Paul Fultz II authored
* Enable hiprtc by default
-
- 07 Apr, 2023 1 commit
-
-
Paul Fultz II authored
Converts can be inserted when the scales and input differ in the onnx file (we are already doing this implicit conversion in the ref implementation). This will also improve the compile time of quantizelinear.hpp since we can remove the nested visit method.
-
- 06 Apr, 2023 2 commits
-
-
Charlie Lin authored
Examples:
bin/driver verify /codes/onnx_models/resnet50-v1-7/resnet50-v1-7.onnx --split-single-dyn-dim --batch 3 --dyn-input-dim @data "[{min:1, max:4}, 3, 224, 224]"
bin/driver compile /codes/onnx_models/resnet50-v1-7/resnet50-v1-7.onnx --split-single-dyn-dim --default-dyn-dim "{min:1, max:10}" --output resnet50_batch1-10.mxr
bin/driver perf resnet50_batch1-10.mxr --batch 4
-
Paul Fultz II authored
Automatically fuse multiple reductions and pointwise operations.
-
- 05 Apr, 2023 3 commits
-
-
Paul Fultz II authored
* Add MIGRAPHX_VALIDATE_MATCHES env variable to validate each matcher
-
Paul Fultz II authored
This will replace conv(x+a, w) with conv(x, w) + conv(a, w) where a is a constant so conv(a, w) can be replaced with a constant.
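The rewrite relies on convolution being linear in its input. A tiny 1D "valid" convolution (illustrative only; MIGraphX's convolution is N-dimensional) demonstrates that conv(x + a, w) equals conv(x, w) + conv(a, w), so when a is a constant the conv(a, w) term can be folded at compile time.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal 1D "valid" convolution: out[i] = sum_j in[i + j] * w[j].
std::vector<double> conv1d(const std::vector<double>& in, const std::vector<double>& w)
{
    std::vector<double> out(in.size() - w.size() + 1, 0.0);
    for(std::size_t i = 0; i < out.size(); ++i)
        for(std::size_t j = 0; j < w.size(); ++j)
            out[i] += in[i + j] * w[j];
    return out;
}
```

Because the operation is linear, conv1d({x0+a0, ...}, w) is elementwise equal to conv1d(x, w) plus conv1d(a, w), which is exactly what lets the constant branch be precomputed.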
-
Paul Fultz II authored
-
- 04 Apr, 2023 2 commits
-
-
shivadbhavsar authored
Bug found due to a failing torch benchmark. Added a test case to reproduce the issue causing the model to error out on compile. The original logic results in the following error: AMDMIGraphX/src/include/migraphx/op/unsqueeze.hpp:128: normalize_compute_shape: UNSQUEEZE: Axis dimenstion is not divisible by step
-
Charlie Lin authored
- Makes the optimals into a std::set<std::size_t>
- Changes shape object functions to handle the opts change
- Changes convolution, flatten, and pooling so that they no longer calculate the output optimal dimensions and instead return empty opts. Will need to change this in the future if we want to support dynamic shapes fully.
- Many changes to tests and shape calls with respect to the new optimals
-
- 03 Apr, 2023 2 commits
-
-
shivadbhavsar authored
-
Charlie Lin authored
Adds the promote_literals compiler pass that moves literals from the submodules to the main module. With the eliminate_common_subexpression pass, it will remove copies of literals created during split_single_dyn_dim. Pass is enabled with the split_single_dyn_dim compile option.
-
- 01 Apr, 2023 1 commit
-
-
Umang Yadav authored
-
- 31 Mar, 2023 1 commit
-
-
Charlie Lin authored
Adds a new GPU compiler pass split_single_dyn_dim that handles when one input parameter has a single non-fixed dynamic_dimension; this commonly occurs for dynamic batch or BERT sequence length. Splits the dynamic shape into several submodules with static input parameters to handle all of the cases in the dynamic_dimension range. Essentially does what I manually did for the select_module verify tests. Adds a compile option split_single_dyn_dim that toggles the pass on/off; defaults to false. Updates verify_program.hpp and run_verify.cpp to allow the tests to change the compile_options.
-