k230 nncase broadcast multiplication support

Steps to reproduce

While trying to deploy an image-enhancement model on the k230: the model contains 2D attention structures such as SEBlock, which broadcast-multiply a (N,C,1,1) tensor onto a (N,C,H,W) tensor. After converting the model to ONNX, kmodel compilation throws the following error:

(assert(ow <= 64 && oh <= 16 && oh * ow <= 256) error!
 File "/home/gitlab-runner/builds/zaC7hZ1H/1/maix2-ai-sw/k510-gnne-compiler/modules/Nncase.Modules.K230/Transform/Rules/Tile/TilePdp1.cs", line 308 .)

Specific questions
Does k230 nncase support larger broadcast sizes?
Can model compilation and the simulator print debug information at run time, to make it easier to pinpoint exactly where the model fails?
Is there a detailed operator support manual, so that models can be designed for the KPU according to the operator spec limits?
Additionally: could the model post-processing framework add data format conversion for image-enhancement model outputs? This is currently optimized with a hand-written RVV implementation, which is cumbersome.
Software and hardware version information

Error log

Unhandled exception. System.AggregateException: One or more errors occurred. (assert(ow <= 64 && oh <= 16 && oh * ow <= 256) error!
 File "/home/gitlab-runner/builds/zaC7hZ1H/1/maix2-ai-sw/k510-gnne-compiler/modules/Nncase.Modules.K230/Transform/Rules/Tile/TilePdp1.cs", line 308 .)
 ---> System.InvalidOperationException: assert(ow <= 64 && oh <= 16 && oh * ow <= 256) error!
 File "/home/gitlab-runner/builds/zaC7hZ1H/1/maix2-ai-sw/k510-gnne-compiler/modules/Nncase.Modules.K230/Transform/Rules/Tile/TilePdp1.cs", line 308 .
   at Nncase.Passes.Rules.K230.TileUtilities.Assert(Boolean v, String vStr, String path, Int32 line)
   at Nncase.Passes.Rules.K230.TilePdp1.SearchGlbParameters(Call ld, Call st, Call pdp)
   at Nncase.Passes.Rules.K230.TilePdp1.GetReplace(Call call, GNNEPdp1 callOp, Call ld, Call st)
   at Nncase.Passes.Rules.K230.TilePdp1.GetReplace(IMatchResult __result, RunPassContext __context)
   at Nncase.Passes.Rules.Tile.K230FusionConvertVisitor.Process(Fusion fusion)
   at Nncase.Passes.Rules.Tile.K230FusionConvertVisitor.RewriteLeafFusion(Fusion expr)
   at Nncase.IR.ExprVisitor`3.DispatchVisit(Expr expr, TContext context)
   at Nncase.IR.ExprRewriter`1.Rewrite(Expr expr, TContext context)
   at Nncase.Passes.Rules.Tile.CheckedConvertMutator.RewriteLeafFusion(Fusion expr)
   at Nncase.IR.ExprVisitor`3.DispatchVisit(Expr expr, TContext context)
   at Nncase.IR.ExprRewriter`1.VisitOperands(Expr expr, TContext context)
   at Nncase.IR.ExprVisitor`3.VisitCall(Call expr, TContext context)
   at Nncase.IR.ExprVisitor`3.DispatchVisit(Expr expr, TContext context)
   at Nncase.IR.ExprRewriter`1.VisitOperands(Expr expr, TContext context)
   at Nncase.IR.ExprVisitor`3.VisitCall(Call expr, TContext context)
   at Nncase.IR.ExprVisitor`3.DispatchVisit(Expr expr, TContext context)
   at Nncase.IR.ExprRewriter`1.VisitOperands(Expr expr, TContext context)
   at Nncase.IR.ExprVisitor`3.VisitCall(Call expr, TContext context)
   at Nncase.IR.ExprVisitor`3.DispatchVisit(Expr expr, TContext context)
   at Nncase.IR.ExprRewriter`1.VisitOperands(Expr expr, TContext context)
   at Nncase.IR.ExprVisitor`3.VisitCall(Call expr, TContext context)
   at Nncase.IR.ExprVisitor`3.DispatchVisit(Expr expr, TContext context)
   at Nncase.IR.ExprRewriter`1.VisitOperands(Expr expr, TContext context)
   at Nncase.IR.ExprVisitor`3.VisitCall(Call expr, TContext context)
   at Nncase.IR.ExprVisitor`3.DispatchVisit(Expr expr, TContext context)
   at Nncase.IR.ExprRewriter`1.VisitOperands(Expr expr, TContext context)
   at Nncase.IR.ExprVisitor`3.VisitCall(Call expr, TContext context)
   at Nncase.IR.ExprVisitor`3.DispatchVisit(Expr expr, TContext context)
   at Nncase.IR.ExprRewriter`1.VisitOperands(Expr expr, TContext context)
   at Nncase.IR.ExprVisitor`3.VisitCall(Call expr, TContext context)
   at Nncase.IR.ExprVisitor`3.DispatchVisit(Expr expr, TContext context)
   at Nncase.IR.ExprRewriter`1.VisitOperands(Expr expr, TContext context)
   at Nncase.IR.ExprVisitor`3.VisitCall(Call expr, TContext context)
   at Nncase.IR.ExprVisitor`3.DispatchVisit(Expr expr, TContext context)
   at Nncase.IR.ExprRewriter`1.VisitOperands(Expr expr, TContext context)
   at Nncase.IR.ExprVisitor`3.VisitTuple(Tuple expr, TContext context)
   at Nncase.IR.ExprVisitor`3.DispatchVisit(Expr expr, TContext context)
   at Nncase.IR.ExprRewriter`1.VisitOperands(Expr expr, TContext context)
   at Nncase.IR.ExprVisitor`3.VisitFunction(Function expr, TContext context)
   at Nncase.IR.ExprVisitor`3.DispatchVisit(Expr expr, TContext context)
   at Nncase.IR.ExprRewriter`1.Rewrite(Expr expr, TContext context)
   at Nncase.IR.ExprRewriter.Rewrite(Expr expr)
   at Nncase.Passes.Rules.Tile.K230FusionToTirPass.RunCoreAsync(IRModule module, RunPassContext options)
   at Nncase.Passes.Pass`2.RunAsync(TInput input, RunPassContext context)
   at Nncase.Passes.PassManager.ModulePassGroup.RunAsync(IRModule module)
   at Nncase.Passes.PassManager.RunAsync(IRModule module)
   at Nncase.Compiler.Compiler.RunPassAsync(Action`1 register, String name, IProgress`1 progress, CancellationToken token)
   at Nncase.Compiler.Compiler.CompileAsync(IProgress`1 progress, CancellationToken token)
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
   at Nncase.Compiler.Interop.CApi.CompilerCompile(IntPtr compilerHandle)
Aborted (core dumped)

Supplementary material
A PyTorch SEBlock class is given below:

import torch
import torch.nn as nn
import torch.nn.functional as F


class SEBlock(nn.Module):
    def __init__(self, input_channels, reduction_ratio=16):
        super(SEBlock, self).__init__()
        # Global average pooling to (N, C, 1, 1), followed by a channel-wise
        # squeeze-and-excitation MLP.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Linear(input_channels, input_channels // reduction_ratio)
        self.fc2 = nn.Linear(input_channels // reduction_ratio, input_channels)
        self._init_weights()

    def _init_weights(self):
        # Placeholder: the original initialization code was not included in the post.
        for m in (self.fc1, self.fc2):
            nn.init.xavier_uniform_(m.weight)
            nn.init.zeros_(m.bias)

    def forward(self, x):
        batch_size, num_channels, _, _ = x.size()
        y = self.pool(x).reshape(batch_size, num_channels)
        y = F.relu(self.fc1(y))
        y = torch.tanh(self.fc2(y))
        # Broadcast the (N, C, 1, 1) attention weights onto the (N, C, H, W) input.
        y = y.reshape(batch_size, num_channels, 1, 1)
        return x * y
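
For reference, a minimal export sketch for the block above is shown next; the channel count, input resolution, and opset version are placeholder assumptions, not the values from the original model:

import torch

# Hypothetical standalone export of the SEBlock above; shapes and opset are assumptions.
model = SEBlock(input_channels=64).eval()
dummy = torch.randn(1, 64, 256, 256)
torch.onnx.export(
    model,
    dummy,
    "seblock.onnx",
    opset_version=11,
    input_names=["input"],
    output_names=["output"],
)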
1 Answer
  1. Does k230 nncase support larger broadcast sizes?
    A: This is caused by AdaptiveAvgPool2d; the input is too large, and that case is currently not supported. (A possible workaround to experiment with is sketched after this list.)
  2. Can model compilation and simulator runs print debug information to locate where the model fails?
    A: This may be considered in the future.
  3. Is there a detailed operator support manual, so that models can be designed around KPU operator limits?
    A: Operator support can be checked in the nncase repo. A KPU operator design specification is indeed a useful feature and may be provided later.
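
Following up on answer 1: while larger pooling inputs remain unsupported, one possible workaround to experiment with is to express the global pooling as a spatial mean reduction, which typically exports to an ONNX ReduceMean instead of GlobalAveragePool. Whether this avoids the TilePdp1 constraint depends on how nncase lowers the reduction, so treat the variant below purely as an untested sketch.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SEBlockMeanPool(nn.Module):
    # Variant of the SEBlock above with AdaptiveAvgPool2d(1) replaced by a
    # spatial mean reduction. Untested workaround sketch, not an official fix.
    def __init__(self, input_channels, reduction_ratio=16):
        super().__init__()
        self.fc1 = nn.Linear(input_channels, input_channels // reduction_ratio)
        self.fc2 = nn.Linear(input_channels // reduction_ratio, input_channels)

    def forward(self, x):
        batch_size, num_channels, _, _ = x.size()
        y = x.mean(dim=(2, 3))                    # global average over H and W
        y = F.relu(self.fc1(y))
        y = torch.tanh(self.fc2(y))
        y = y.reshape(batch_size, num_channels, 1, 1)
        return x * y                              # broadcast back onto (N, C, H, W)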