K230 kmodel structure question


I converted the official yolo11n model to ONNX and then to kmodel, and ran it with the official MicroPython example code below. The camera feed works, but no detection boxes are displayed.
I'm a beginner and would like to ask for some advice:
1. With this workflow, how can I check whether the kmodel conversion succeeded?
2. If it did succeed, how can I see what format the kmodel's output is in? (ONNX can be visualized with Netron; is there any tool or code to inspect a kmodel's structure?)
3. How should I modify the post-processing and display code so that the boxes are drawn correctly? Which parts do I need to change?
4. Is there an official tutorial or other help documentation for this? (The official docs are numerous but scattered: some use MicroPython, some the command line, some Linux, some cover the SDK, some cover CanMV, some touch briefly on the AI parts, some use the new YOLO library and some don't, so I'm quite confused.)

from libs.PipeLine import PipeLine, ScopedTiming
from libs.YOLO import YOLO11
import os,sys,gc
import ulab.numpy as np
import image


if __name__=="__main__":
    # Display mode: default is "hdmi"; options are "hdmi" and "lcd"
    display_mode="hdmi"
    rgb888p_size=[320,320]
    if display_mode=="hdmi":
        display_size=[1920,1080]
    else:
        display_size=[800,480]
    kmodel_path="/data/test.kmodel"
    labels = ["person","bicycle","car","motorcycle","airplane","bus","train","truck","boat","traffic", "light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie","suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"]
    confidence_threshold = 0.5
    nms_threshold=0.45
    mask_threshold=0.5
    model_input_size=[640,640]
    # Initialize the PipeLine
    pl=PipeLine(rgb888p_size=rgb888p_size,display_size=display_size,display_mode=display_mode)
    pl.create()
    # Initialize the YOLO11 detector instance
    yolo=YOLO11(task_type="detect",mode="video",kmodel_path=kmodel_path,labels=labels,rgb888p_size=rgb888p_size,model_input_size=model_input_size,display_size=display_size,conf_thresh=confidence_threshold,nms_thresh=nms_threshold,mask_thresh=mask_threshold,max_boxes_num=50,debug_mode=0)
    yolo.config_preprocess()
    try:
        while True:
            os.exitpoint()
            with ScopedTiming("total",1):
                # Run inference frame by frame
                img=pl.get_frame()
                res=yolo.run(img)
                yolo.draw_result(res,pl.osd_img)
                pl.show_image()
                gc.collect()
    except Exception as e:
        sys.print_exception(e)
    finally:
        yolo.deinit()
        pl.destroy()
1 Answer

1、To check whether the kmodel conversion succeeded, refer to Chapter 10 and run the similarity verification between the ONNX and kmodel outputs: https://developer.canaan-creative.com/k230_canmv/zh/dev/zh/example/ai/YOLO%E5%A4%A7%E4%BD%9C%E6%88%98.html#id94
2、A kmodel cannot be visualized; the only way to confirm the conversion is the output-similarity check (a minimal sketch is included after point 4), but in general this is not a big problem;
3、If no boxes are displayed, the input settings may be unreasonable or the threshold too high (see the snippet after these bullets):

  • Change rgb888p_size to [1280,720]. This is not the model's input resolution but the resolution of the captured image; model_input_size is the model's input resolution;
  • You can try changing confidence_threshold to 0.2 and see whether any boxes appear;
  • When converting the model, set the ptq_option parameter to 1, re-convert the model, and test again;
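Concretely, for the first two bullets only a few configuration lines in the script above need to change (the values are the ones suggested here):

rgb888p_size = [1280, 720]       # camera capture resolution, NOT the model input size
model_input_size = [640, 640]    # must match the resolution the kmodel was converted for
confidence_threshold = 0.2       # temporarily lowered, just to check whether any boxes appear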

4、Different documents are provided for different software stacks. If you only use MicroPython, please refer to the CanMV K230 documentation: https://developer.canaan-creative.com/k230_canmv/zh/dev/index.html
The other documents cover the dual-system (Linux + RT-Smart) setup, pure RT-Smart, pure Linux, and a comprehensive AI development guide. Please pick the document that matches your setup.
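For the similarity check mentioned in points 1 and 2, a minimal PC-side sketch follows. It assumes you have run the same preprocessed image through both models and saved the kmodel output (for example from nncase's simulator, as in Chapter 10 of the doc linked above) as a .npy file; the file names are placeholders, and onnxruntime plus numpy are the only dependencies.

import numpy as np
import onnxruntime as ort

def cosine_similarity(a, b):
    # Cosine similarity between two flattened output tensors
    a = a.astype(np.float32).ravel()
    b = b.astype(np.float32).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# One preprocessed input image in CHW layout (3x640x640), using the same preprocessing
# as the kmodel conversion; a batch dimension is added below (placeholder file name).
img = np.load("calib_sample_chw.npy")[None, ...].astype(np.float32)

# ONNX-side output for that image.
sess = ort.InferenceSession("yolo11n.onnx")
onnx_out = sess.run(None, {sess.get_inputs()[0].name: img})[0]

# kmodel-side output on the same image, dumped beforehand (placeholder file name).
kmodel_out = np.load("kmodel_output_0.npy")

print("cosine similarity:", cosine_similarity(onnx_out, kmodel_out))
# A value close to 1.0 (e.g. above 0.99) suggests the conversion preserved the outputs.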

I'll open a new post and paste the code there.

That is the calib_data you mentioned above: it is the calibration data and it is required. You can simply take it from your training data, or try it with the provided test set.
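For reference, a rough sketch of assembling calib_data from training images is below. The folder path, sample count, and dtype are assumptions; use the same preprocessing (size, dtype, normalization) that your conversion script's preprocess settings expect, and check against that script how the list is actually passed in (for example via the PTQ options of the nncase-based script from the docs above).

import os
import numpy as np
from PIL import Image

CALIB_DIR = "datasets/train/images"   # hypothetical folder of training images
INPUT_SIZE = (640, 640)               # width, height; must match model_input_size
NUM_SAMPLES = 20                      # a few dozen representative images is usually enough

def load_calib_samples(calib_dir, num_samples):
    # Load images, resize to the model input size, and return 1x3xHxW uint8 arrays (HWC -> CHW)
    samples = []
    for name in sorted(os.listdir(calib_dir))[:num_samples]:
        img = Image.open(os.path.join(calib_dir, name)).convert("RGB").resize(INPUT_SIZE)
        chw = np.asarray(img, dtype=np.uint8).transpose(2, 0, 1)
        samples.append(chw[None, ...])
    return samples

calib_data = [load_calib_samples(CALIB_DIR, NUM_SAMPLES)]   # one list per model input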

Thanks for the reply. After changing rgb888p_size to [1280,720] and confidence_threshold to 0.2 (and then 0.1) as you suggested, a box appeared on screen for a moment, but it filled the entire screen and the object was not correctly detected.