#include <migraphx/gpu/lowering.hpp>
#include <migraphx/manage_ptr.hpp>
#include <migraphx/instruction.hpp>
#include <migraphx/operators.hpp>
#include <migraphx/gpu/miopen.hpp>
#include <migraphx/gpu/hip.hpp>
#include <migraphx/gpu/rocblas.hpp>
#include <migraphx/gpu/context.hpp>
#include <migraphx/gpu/convolution.hpp>
#include <migraphx/gpu/quant_convolution.hpp>
#include <migraphx/gpu/contiguous.hpp>
#include <migraphx/gpu/relu.hpp>
#include <migraphx/gpu/sigmoid.hpp>
#include <migraphx/gpu/abs.hpp>
#include <migraphx/gpu/leaky_relu.hpp>
#include <migraphx/gpu/elu.hpp>
#include <migraphx/gpu/softmax.hpp>
#include <migraphx/gpu/logsoftmax.hpp>
#include <migraphx/gpu/add.hpp>
#include <migraphx/gpu/sub.hpp>
#include <migraphx/gpu/exp.hpp>
#include <migraphx/gpu/erf.hpp>
#include <migraphx/gpu/log.hpp>
#include <migraphx/gpu/sin.hpp>
#include <migraphx/gpu/cos.hpp>
#include <migraphx/gpu/tan.hpp>
#include <migraphx/gpu/sinh.hpp>
#include <migraphx/gpu/cosh.hpp>
#include <migraphx/gpu/tanh.hpp>
#include <migraphx/gpu/asin.hpp>
#include <migraphx/gpu/acos.hpp>
#include <migraphx/gpu/atan.hpp>
#include <migraphx/gpu/sqrt.hpp>
#include <migraphx/gpu/mul.hpp>
#include <migraphx/gpu/div.hpp>
#include <migraphx/gpu/max.hpp>
#include <migraphx/gpu/min.hpp>
#include <migraphx/gpu/rsqrt.hpp>
#include <migraphx/gpu/round.hpp>
#include <migraphx/gpu/pow.hpp>
#include <migraphx/gpu/sqdiff.hpp>
#include <migraphx/gpu/sign.hpp>
#include <migraphx/gpu/gemm.hpp>
#include <migraphx/gpu/concat.hpp>
#include <migraphx/gpu/pad.hpp>
#include <migraphx/gpu/gather.hpp>
#include <migraphx/gpu/lrn.hpp>
#include <migraphx/gpu/convert.hpp>
#include <migraphx/gpu/clip.hpp>
#include <migraphx/gpu/reduce_sum.hpp>
#include <migraphx/gpu/reduce_mean.hpp>
#include <migraphx/gpu/pooling.hpp>
#include <migraphx/gpu/batchnorm.hpp>
#include <migraphx/gpu/argmax.hpp>
#include <migraphx/gpu/argmin.hpp>
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

namespace migraphx {
inline namespace MIGRAPHX_INLINE_NS {
namespace gpu {

struct miopen_apply
{
    program* prog = nullptr;
    context ctx{};
    // Maps an operator name to a callback that lowers one instruction
    // of that kind to its GPU implementation.
    std::unordered_map<std::string, std::function<instruction_ref(instruction_ref)>> apply_map{};
    instruction_ref last{};

    void check_shape(shape x, instruction_ref i)
    {
        assert(x == i->get_shape());
        (void)x;
        (void)i;
    }

    void init()
    {
        this->last = instruction::get_output_alias(std::prev(prog->end()));

        // The template argument names the gpu wrapper type (declared in the
        // matching migraphx/gpu/*.hpp header) implementing each operator.
        add_miopen_simple_op<miopen_abs>("abs", make_abs);

        add_miopen_extend_op<miopen_leaky_relu, op::leaky_relu>("leaky_relu", make_leaky_relu);
        add_miopen_extend_op<miopen_elu, op::elu>("elu", make_elu);

        add_generic_op<hip_add>("add");
        add_generic_op<hip_sub>("sub");
        add_generic_op<hip_exp>("exp");
        add_generic_op<hip_erf>("erf");
        add_generic_op<hip_log>("log");
        add_generic_op<hip_sin>("sin");
        add_generic_op<hip_cos>("cos");
        add_generic_op<hip_tan>("tan");
        add_generic_op<hip_sinh>("sinh");
        add_generic_op<hip_cosh>("cosh");
        add_generic_op<hip_tanh>("tanh");
        add_generic_op<hip_asin>("asin");
        add_generic_op<hip_acos>("acos");
        add_generic_op<hip_atan>("atan");
        add_generic_op<hip_sqrt>("sqrt");
        add_generic_op<hip_mul>("mul");
        add_generic_op<hip_div>("div");
        add_generic_op<hip_max>("max");
        add_generic_op<hip_min>("min");
        add_generic_op<hip_rsqrt>("rsqrt");
        add_generic_op<hip_round>("round");
        add_generic_op<hip_pow>("pow");
        add_generic_op<hip_sqdiff>("sqdiff");
        add_generic_op<hip_relu>("relu");
        add_generic_op<hip_sign>("sign");
        add_generic_op<hip_sigmoid>("sigmoid");

        add_extend_op<miopen_gemm, op::dot>("dot");
        add_extend_op<miopen_quant_gemm, op::quant_dot>("quant_dot");
        add_extend_op<miopen_contiguous, op::contiguous>("contiguous");
        add_extend_op<hip_concat, op::concat>("concat");
        add_extend_op<miopen_softmax, op::softmax>("softmax");
        add_extend_op<hip_logsoftmax, op::logsoftmax>("logsoftmax");
        add_extend_op<hip_argmax, op::argmax>("argmax");
        add_extend_op<hip_argmin, op::argmin>("argmin");
        add_extend_op<hip_gather, op::gather>("gather");
        add_extend_op<hip_pad, op::pad>("pad");
        add_extend_op<hip_convert, op::convert>("convert");
        add_extend_op<hip_clip, op::clip>("clip");
        add_extend_op<hip_reduce_sum, op::reduce_sum>("reduce_sum");
        add_extend_op<hip_reduce_mean, op::reduce_mean>("reduce_mean");

        add_lrn_op();
        add_convolution_op();
        add_quant_convolution_op();
        // add_quant_dot_op();
        add_pooling_op();
        add_batch_norm_inference_op();
    }

    void apply()
    {
        init();
        for(auto it = prog->begin(); it != prog->end(); it++)
        {
            auto s = it->get_shape();
            if(apply_map.count(it->name()) > 0)
            {
                check_shape(s, apply_map.at(it->name())(it));
            }
        }
    }

    instruction_ref insert_allocation(instruction_ref ins, const shape& s, std::string tag = "")
    {
        // The final (untagged) result is written to the program's "output"
        // parameter; everything else gets a fresh device allocation.
        if(ins == last and tag.empty())
        {
            return prog->add_parameter("output", s);
        }
        return prog->insert_instruction(ins, hip_allocate{s, std::move(tag)});
    }

    void add_convolution_op()
    {
        apply_map.emplace("convolution", [=](instruction_ref ins) {
            auto&& op = any_cast<op::convolution>(ins->get_operator());
            auto conv = miopen_convolution{op, make_conv(op)};
            // Compiling the convolution selects an algorithm and reports the
            // workspace shape it requires.
            auto ws = conv.compile(ctx, ins->get_shape(), to_shapes(ins->inputs()));

            auto workspace = insert_allocation(ins, ws, "workspace");
            auto output    = insert_allocation(ins, ins->get_shape());

            return prog->replace_instruction(
                ins, conv, ins->inputs().at(0), ins->inputs().at(1), workspace, output);
        });
    }

    void add_quant_convolution_op()
    {
        apply_map.emplace("quant_convolution", [=](instruction_ref ins) {
            auto&& op = any_cast<op::quant_convolution>(ins->get_operator());
            auto conv = miopen_quant_convolution{op, make_conv(op)};
            auto ws   = conv.compile(ctx, ins->get_shape(), to_shapes(ins->inputs()));

            auto args      = ins->inputs();
            auto workspace = insert_allocation(ins, ws, "workspace");
            auto output    = insert_allocation(ins, ins->get_shape());

            return prog->replace_instruction(ins, conv, args[0], args[1], workspace, output);
        });
    }

    void add_pooling_op()
    {
        apply_map.emplace("pooling", [=](instruction_ref ins) {
            auto&& op   = any_cast<op::pooling>(ins->get_operator());
            auto pd     = make_pooling(op);
            auto output = insert_allocation(ins, ins->get_shape());

            return prog->replace_instruction(
                ins,
                miopen_pooling{op, std::move(pd)},
                ins->inputs().at(0),
                output);
        });
    }

    void add_lrn_op()
    {
        apply_map.emplace("lrn", [=](instruction_ref ins) {
            auto&& op   = any_cast<op::lrn>(ins->get_operator());
            auto ldesc  = make_lrn(op);
            auto output = insert_allocation(ins, ins->get_shape());
            return prog->replace_instruction(
                ins, miopen_lrn{std::move(ldesc)}, ins->inputs().at(0), output);
        });
    }

    // Lower an op whose GPU implementation T is default-constructible and
    // takes the original inputs plus an output buffer.
    template <class T>
    void add_generic_op(std::string name)
    {
        apply_map.emplace(name, [=](instruction_ref ins) {
            auto output = insert_allocation(ins, ins->get_shape());
            std::vector<instruction_ref> refs = ins->inputs();
            refs.push_back(output);
            return prog->replace_instruction(ins, T{}, refs);
        });
    }

    // Lower an op whose GPU implementation T is constructed from the
    // original operator Op.
    template <class T, class Op>
    void add_extend_op(std::string name)
    {
        apply_map.emplace(name, [=](instruction_ref ins) {
            auto&& op   = any_cast<Op>(ins->get_operator());
            auto output = insert_allocation(ins, ins->get_shape());
            std::vector<instruction_ref> refs = ins->inputs();
            refs.push_back(output);
            return prog->replace_instruction(ins, T{op}, refs);
        });
    }

    // Lower a unary MIOpen op whose activation descriptor depends on the
    // operator's alpha parameter.
    template <class T, class Op, class F>
    void add_miopen_extend_op(std::string name, F f)
    {
        apply_map.emplace(name, [=](instruction_ref ins) {
            auto&& op   = any_cast<Op>(ins->get_operator());
            auto ad     = f(op.alpha);
            auto output = insert_allocation(ins, ins->get_shape());
            return prog->replace_instruction(ins, T{std::move(ad)}, ins->inputs().at(0), output);
        });
    }

    template <class T, class F>
    void add_miopen_simple_op(std::string name, F f)
    {
        apply_map.emplace(name, [=](instruction_ref ins) {
            auto ad     = f();
            auto output = insert_allocation(ins, ins->get_shape());
            return prog->replace_instruction(ins, T{std::move(ad)}, ins->inputs().at(0), output);
        });
    }

    void add_batch_norm_inference_op()
    {
        apply_map.emplace("batch_norm_inference", [=](instruction_ref ins) {
            auto&& op       = any_cast<op::batch_norm_inference>(ins->get_operator());
            auto output     = insert_allocation(ins, ins->get_shape());
            shape old_shape = ins->inputs().at(1)->get_shape();
            // MIOpen expects the scale/bias/mean/variance tensors as 1xCx1x1.
            std::vector<int64_t> new_shape{1, static_cast<int64_t>(old_shape.elements()), 1, 1};
            auto reshape_op = op::reshape{new_shape};
            std::vector<instruction_ref> reshapes;
            std::transform(ins->inputs().begin() + 1,
                           ins->inputs().end(),
                           std::back_inserter(reshapes),
                           [&](auto i) { return prog->insert_instruction(ins, reshape_op, i); });
            return prog->replace_instruction(ins,
                                             miopen_batch_norm_inference{op},
                                             ins->inputs().at(0),
                                             reshapes[0],
                                             reshapes[1],
                                             reshapes[2],
                                             reshapes[3],
                                             output);
        });
    }
};

void lowering::apply(program& p) const { miopen_apply{&p, ctx}.apply(); }

} // namespace gpu
} // namespace MIGRAPHX_INLINE_NS
} // namespace migraphx