#include <migraphx/cpu/lowering.hpp>
#include <migraphx/cpu/context.hpp>
#include <migraphx/argument.hpp>
#include <migraphx/auto_register.hpp>
#include <migraphx/check_shapes.hpp>
#include <migraphx/dfor.hpp>
#include <migraphx/instruction.hpp>
#include <migraphx/iterator_for.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/matcher.hpp>
#include <migraphx/match/gelu_erf.hpp>
#include <migraphx/match/gelu_tanh.hpp>
#include <migraphx/match/layernorm.hpp>
#include <migraphx/module.hpp>
#include <migraphx/op/identity.hpp>
#include <migraphx/op/im2col.hpp>
#include <migraphx/op/leaky_relu.hpp>
#include <migraphx/op/pad.hpp>
#include <migraphx/op/rnn_var_sl_last_output.hpp>
#include <migraphx/pad_calc.hpp>
#include <migraphx/par_for.hpp>
#include <migraphx/reflect.hpp>
#include <migraphx/register_op.hpp>
#include <migraphx/serialize.hpp>
#include <migraphx/shape.hpp>
#include <migraphx/shape_for_each.hpp>
#include <algorithm>
#include <cassert>
#include <functional>
#include <iterator>
#include <ostream>
#include <string>
#include <type_traits>
#include <unordered_map>
#include <utility>
#include <vector>

namespace migraphx {
inline namespace MIGRAPHX_INLINE_NS {
namespace cpu {

template <class T>
T zero(const T&)
{
    return T(0);
}

template <class T>
typename std::conditional_t<std::is_integral<T>{}, std::make_signed<T>, std::enable_if<true, T>>::
    type
    make_signed(T x)
{
    return x;
}

struct cpu_im2col
{
    op::im2col op;

    template <class Self, class F>
    static auto reflect(Self& self, F f)
    {
        return migraphx::reflect(self.op, f);
    }

    static std::string name() { return "cpu::im2col"; }

    shape compute_shape(const std::vector<shape>& inputs) const
    {
        return op.normalize_compute_shape(inputs);
    }

    argument compute(context&, const shape& output_shape, std::vector<argument> args) const
    {
        argument result{output_shape};
        auto input_shape   = args[0].get_shape();
        auto weights_shape = args[1].get_shape();
        visit_all(result, args[0])([&](auto col, auto input) {
            const std::size_t& height   = input_shape.lens()[2];
            const std::size_t& width    = input_shape.lens()[3];
            const std::size_t& channels = weights_shape.lens()[1];
            const std::size_t& kernel_h = weights_shape.lens()[2];
            const std::size_t& kernel_w = weights_shape.lens()[3];
            const std::size_t& pad_h    = op.padding[0];
            const std::size_t& pad_w    = op.padding[1];
            const std::size_t& stride_h = op.stride[0];
            const std::size_t& stride_w = op.stride[1];

            long kdiv2_h = long(kernel_h) / 2;
            long kdiv2_w = long(kernel_w) / 2;
            // calculate output sizes
            const std::size_t col_height = (height - kernel_h + 2 * pad_h) / stride_h + 1;
            const std::size_t col_width  = (width - kernel_w + 2 * pad_w) / stride_w + 1;
            // account for padding for the starting position of the input pixels
            long iinput = kdiv2_h - long(pad_h);
            // loop over output pixels (ioutput, joutput)
            for(std::size_t ioutput = 0; ioutput < col_height; ioutput++, iinput += stride_h)
            {
                long jinput = kdiv2_w - long(pad_w);
                for(std::size_t joutput = 0; joutput < col_width; joutput++, jinput += stride_w)
                {
                    // compute linear index for output
                    std::size_t ldx = ioutput * col_width + joutput;
                    std::size_t p   = 0;
                    dfor(channels, kernel_h, kernel_w)(
                        [&](std::size_t c, std::size_t koffset, std::size_t loffset) {
                            auto idx = iinput + long(koffset) - kdiv2_h;
                            auto jdx = jinput + long(loffset) - kdiv2_w;
                            col(ldx, p) =
                                ((idx >= 0) && (idx < height) && (jdx >= 0) && (jdx < width))
                                    ? input(0, c, idx, jdx)
                                    : 0;
                            p++;
                        });
                }
            }
        });
        return result;
    }
};
MIGRAPHX_REGISTER_OP(cpu_im2col)

// Generic wrapper that delegates shape computation and evaluation to the wrapped operation.
struct cpu_op
{
    operation op = op::identity{};

    template <class Self, class F>
    static auto reflect(Self& self, F f)
    {
        return migraphx::reflect(self.op, f);
    }

    std::string name() const { return "cpu::op"; }

    shape compute_shape(const std::vector<shape>& inputs) const { return op.compute_shape(inputs); }

    argument compute(context&, const shape& output_shape, const std::vector<argument>& args) const
    {
        return op.compute(output_shape, args);
    }

    value to_value() const
    {
        value v;
        v["name"]     = op.name();
        v["operator"] = op.to_value();
        return v;
    }

    void from_value(const value& v)
    {
        op = make_op(v.at("name").to<std::string>(), v.at("operator"));
    }

    friend std::ostream& operator<<(std::ostream& os, const cpu_op& x)
    {
        os << "cpu::" << x.op;
        return os;
    }
};
MIGRAPHX_REGISTER_OP(cpu_op)

struct cpu_pad
{
    op::pad op;

    template <class Self, class F>
    static auto reflect(Self& self, F f)
    {
        return migraphx::reflect(self.op, f);
    }

    std::string name() const { return "cpu::pad"; }

    shape compute_shape(const std::vector<shape>& inputs) const { return op.compute_shape(inputs); }

    argument compute(context&, const shape& output_shape, std::vector<argument> args) const
    {
        assert(output_shape.standard());
        argument result{output_shape};
        // fill the whole output with the (clamped) pad value first
        result.visit([&](auto output) {
            using type = typename decltype(output)::value_type;
            std::fill(output.begin(), output.end(), pad_clamp<type>(op.value));
        });

        // then copy every input element to its shifted position in the padded output
        visit_all(result, args[0])([&](auto output, auto input) {
            shape_for_each(input.get_shape(), [&](const auto& idx) {
                std::vector<std::size_t> new_idx(idx.size());
                std::transform(
                    idx.begin(), idx.end(), op.pads.begin(), new_idx.begin(), [](auto i, auto j) {
                        return i + j;
                    });
                output(new_idx.begin(), new_idx.end()) = input(idx.begin(), idx.end());
            });
        });

        return result;
    }
};
MIGRAPHX_REGISTER_OP(cpu_pad)

struct leaky_relu_op
{
    op::leaky_relu op;
    std::string name() const { return "cpu::leaky_relu"; }
    auto fcn() const
    {
        auto a = op.alpha;
        return [a](auto x) { return x > 0
                                         ? x : x * a; };
    }
};

template <class Op>
struct cpu_unary2 : auto_register_op<cpu_unary2<Op>>
{
    cpu_unary2() = default;

    template <class T>
    cpu_unary2(T pop) : op(Op{std::move(pop)})
    {
    }

    Op op;

    template <class Self, class F>
    static auto reflect(Self& self, F f)
    {
        return migraphx::reflect(self.op.op, f);
    }

    std::string name() const { return op.name(); }

    shape compute_shape(const std::vector<shape>& inputs) const
    {
        check_shapes{inputs, *this}.has(1);
        const auto& s = inputs.at(0);
        return {s.type(), s.lens()};
    }

    argument compute(context&, const shape& output_shape, std::vector<argument> args) const
    {
        argument result{output_shape};
        visit_all(result, args[0])([&](auto output, auto input) {
            assert(input.get_shape().standard());
            std::transform(input.begin(), input.end(), output.begin(), op.fcn());
        });
        return result;
    }
};

template struct cpu_unary2<leaky_relu_op>;

struct cpu_rnn_var_sl_last_output
{
    op::rnn_var_sl_last_output op;

    template <class Self, class F>
    static auto reflect(Self& self, F f)
    {
        return migraphx::reflect(self.op, f);
    }

    std::string name() const { return "cpu::rnn_var_sl_last_output"; }

    shape compute_shape(std::vector<shape> inputs) const
    {
        return op.compute_shape(std::move(inputs));
    }

    argument compute(const shape& output_shape, std::vector<argument> args) const
    {
        argument result{output_shape};
        auto out_comp_lens = args[0].get_shape().lens();
        out_comp_lens[0]   = 1;
        shape out_comp_s{output_shape.type(), out_comp_lens};

        visit_all(result, args[0])([&](auto output, auto input) {
            args[1].visit([&](auto seq_lens) {
                par_for(output_shape.elements(), [&](auto i) {
                    auto idx = out_comp_s.multi(i);
                    auto b   = idx[2];
                    if(op.direction == op::rnn_direction::reverse or idx[1] == 1)
                    {
                        idx[0] = 0;
                    }
                    else
                    {
                        idx[0] = seq_lens[b] - 1;
                    }
                    output[i] = input(idx.begin(), idx.end());
                });
            });
        });

        return result;
    }
};
MIGRAPHX_REGISTER_OP(cpu_rnn_var_sl_last_output)

// Rewrites the instructions of a module into cpu:: and dnnl:: operators.
struct cpu_apply
{
    module* modl;
    std::unordered_map<std::string, std::function<instruction_ref(instruction_ref)>> apply_map{};
    std::unordered_map<instruction_ref, std::string> prog_output_names{};
    instruction_ref last{};

    void create_output_names()
    {
        this->last = instruction::get_output_alias(std::prev(modl->end()));
        if(this->last->name() == "@return")
        {
            const auto& prog_outputs = last->inputs();
            std::vector<instruction_ref> outputs_alias(prog_outputs.size());

            std::transform(prog_outputs.begin(),
                           prog_outputs.end(),
                           outputs_alias.begin(),
                           [](const auto& i) { return instruction::get_output_alias(i); });

            std::size_t index = 0;
            for(auto ins : outputs_alias)
            {
                prog_output_names[ins] = modl->name() + ":#output_" + std::to_string(index++);
            }
        }
    }

    // When allocate is true an output buffer is appended to the replacement's inputs.
    void extend_op(const std::string& op_name, const std::string& cpu_name, bool allocate = true)
    {
        apply_map.emplace(op_name, [=](instruction_ref ins) {
            auto&& op = ins->get_operator();
            if(allocate)
                return replace(ins, make_op(cpu_name, op.to_value()));
            return modl->replace_instruction(ins, make_op(cpu_name, op.to_value()), ins->inputs());
        });
    }

    void extend_dnnl_algos(const std::string& dnnl_name,
                           const std::vector<std::pair<std::string, std::string>>& algos)
    {
        for(auto&& pp : algos)
        {
            std::string op_name = pp.first;
            std::string algo    = pp.second;
            apply_map.emplace(op_name, [=](instruction_ref ins) {
                auto v = ins->get_operator().to_value();
                if(not v.is_object())
                    return ins;
                v["algo"] = algo;
                auto op   = make_op(dnnl_name, v);
                return replace(ins, op);
            });
        }
    }

    template <class M>
    auto fuse_match(M matcher, const operation& op, const std::vector<std::string>& bind_inputs)
    {
        return match::make_match_finder(matcher, [=](auto&, const auto& r) {
            auto ins = r.result;
            std::vector<instruction_ref> inputs;
            std::transform(bind_inputs.begin(),
                           bind_inputs.end(),
                           std::back_inserter(inputs),
                           [&](const auto& s) { return r.instructions[s]; });
            inputs.push_back(this->insert_allocation(ins, ins->get_shape()));
            modl->replace_instruction(ins, op, inputs);
        });
    }

    void init()
    {
        create_output_names();

        extend_dnnl_algos("dnnl::binary",
                          {
                              {"add", "binary_add"},
                              {"div", "binary_div"},
                              {"max", "binary_max"},
                              {"min", "binary_min"},
                              {"mul", "binary_mul"},
                          });
        extend_dnnl_algos("dnnl::eltwise",
                          {
                              {"abs", "eltwise_abs"},
                              {"elu", "eltwise_elu"},
                              {"exp", "eltwise_exp"},
                              {"log", "eltwise_log"},
                              {"relu", "eltwise_relu"},
                              {"sqrt", "eltwise_sqrt"},
                              {"tanh", "eltwise_tanh"},
                          });
        extend_dnnl_algos("dnnl::reduction",
                          {
                              {"reduce_max", "reduction_max"},
                              {"reduce_mean", "reduction_mean"},
                              {"reduce_min", "reduction_min"},
                              {"reduce_sum", "reduction_sum"},
                          });
        extend_op("concat", "dnnl::concat");
        extend_op("contiguous", "dnnl::reorder");
        extend_op("convolution", "dnnl::convolution");
#ifndef MIGRAPHX_ENABLE_ZENDNN
        extend_op("deconvolution", "dnnl::deconvolution");
        extend_op("dot", "dnnl::dot");
#endif
        extend_op("erf", "cpu::erf");
        extend_op("gather", "cpu::gather");
        extend_op("logsoftmax", "dnnl::logsoftmax");
        extend_op("lrn", "dnnl::lrn");
        extend_op("softmax", "dnnl::softmax");
        extend_op("sub", "cpu::sub");

        extend_op("im2col", "cpu::im2col", false);
        extend_op("leaky_relu", "cpu::leaky_relu", false);
        extend_op("pad", "cpu::pad", false);
        extend_op("rnn_var_sl_last_output", "cpu::rnn_var_sl_last_output", false);
    }

    void apply()
    {
        init();
        // Apply fusion matchers first
        match::find_matches(
            *modl,
            fuse_match(
                match::gelu_erf(), make_op("dnnl::eltwise", {{"algo", "eltwise_gelu_erf"}}), {"x"}),
            fuse_match(match::gelu_tanh(),
                       make_op("dnnl::eltwise", {{"algo", "eltwise_gelu_tanh"}}),
                       {"x"}),
            fuse_match(match::layernorm(), make_op("dnnl::layernorm"), {"x"}));
        // Apply these operators first so the inputs can be const folded
        for(auto it : iterator_for(*modl))
        {
            if(it->name() == "pow")
            {
                apply_pow(it);
            }
        }
        for(auto it : iterator_for(*modl))
        {
            if(it->name() == "pooling")
            {
                apply_pooling(it);
            }
            else if(apply_map.count(it->name()) > 0)
            {
                apply_map.at(it->name())(it);
            }
        }
    }

    instruction_ref apply_pow(instruction_ref ins) const
    {
        auto beta = read_scalar<float>(ins->inputs()[1]);
        if(beta.empty())
            return ins;
        return replace(ins,
                       make_op("dnnl::eltwise",
                               {{"algo", "eltwise_pow"}, {"alpha", 1.0}, {"beta", beta.front()}}),
                       {ins->inputs().front()});
    }

    instruction_ref apply_pooling(instruction_ref ins) const
    {
        auto&& op = ins->get_operator();
        auto v    = op.to_value();
        if(has_op("dnnl::pooling") and ins->get_shape().type() == shape::type_t::float_type and
           not v["ceil_mode"].to<bool>())
            return replace(ins, make_op("dnnl::pooling", op.to_value()));
        return ins;
    }

    template <class T>
    static std::vector<T> read_scalar(instruction_ref ins)
    {
        if(ins->name() == "contiguous")
            return read_scalar<T>(ins->inputs().front());
        if(ins->get_shape().elements() != 1 and not ins->get_shape().scalar())
            return {};
        auto r = ins->eval();
        if(r.empty())
            return {};
        return {r.at<T>()};
    }

    instruction_ref replace(instruction_ref ins, const operation& op) const
    {
        return replace(ins, op, ins->inputs());
    }

    instruction_ref replace(instruction_ref ins,
                            const operation& op,
                            std::vector<instruction_ref> inputs) const
    {
        inputs.push_back(insert_allocation(ins, ins->get_shape()));
        return modl->replace_instruction(ins, op, inputs);
    }

    instruction_ref insert_allocation(instruction_ref ins, const shape& s) const
    {
        return modl->insert_instruction(ins, make_op("cpu::allocate", {{"shape", to_value(s)}}));
    }
};

void lowering::apply(module& m) const { cpu_apply{&m}.apply(); }

} // namespace cpu
} // namespace MIGRAPHX_INLINE_NS
} // namespace migraphx