Hands-on Core ML: YOLOv2 Object Detection (with Source Code)

A few words first:

  In an earlier post, https://www.cnblogs.com/riddick/p/10434339.html, I gave a rough walkthrough of how to convert a PyTorch-trained .pth model into an mlmodel and deploy it on iOS for forward inference. That post only described the class interface and gave no example, which can easily leave readers thinking "without a demo, what are you even talking about?" So today let's go through it with an actual model.

  There are actually plenty of Core ML demos on GitHub, but most of them are written in Swift, and for people coming from C/C++, Objective-C is probably easier to follow. So this time, using YOLOv2 object detection as the example, we create an Objective-C project, debug on a real device, and run forward prediction (source code attached).

  Of course, to save myself some effort, the model is not one I trained; it comes from https://github.com/syshen/YOLO-CoreML . That repository is implemented in Swift, so you can read the two side by side if you are interested. For the YOLOv2 mlmodel file, see this line in the repository's README:

execute download.sh to download the pre-trained model % sh download.sh

Enough chatter; on to the main topic:

1. Create an Xcode project and choose Objective-C as the programming language. Add the model to the Xcode project; I renamed the model to yoloModel and quantized it to 16-bit. Using the original model at 200+ MB works perfectly well too.

  

2. After the model is added to the project, Xcode automatically generates the yoloModel class header, shown below:

//
// yoloModel.h
//
// This file was automatically generated and should not be edited.
//

#import <Foundation/Foundation.h>
#import <CoreML/CoreML.h>
#include <stdint.h>

NS_ASSUME_NONNULL_BEGIN

/// Model Prediction Input Type
API_AVAILABLE(macos(10.13.2), ios(11.2), watchos(4.2), tvos(11.2)) __attribute__((visibility("hidden")))
@interface yoloModelInput : NSObject<MLFeatureProvider>

/// input__0 as color (kCVPixelFormatType_32BGRA) image buffer, 608 pixels wide by 608 pixels high
@property (readwrite, nonatomic) CVPixelBufferRef input__0;
- (instancetype)init NS_UNAVAILABLE;
- (instancetype)initWithInput__0:(CVPixelBufferRef)input__0;
@end

/// Model Prediction Output Type
API_AVAILABLE(macos(10.13.2), ios(11.2), watchos(4.2), tvos(11.2)) __attribute__((visibility("hidden")))
@interface yoloModelOutput : NSObject<MLFeatureProvider>

/// output__0 as 425 x 19 x 19 3-dimensional array of doubles
@property (readwrite, nonatomic, strong) MLMultiArray * output__0;
- (instancetype)init NS_UNAVAILABLE;
- (instancetype)initWithOutput__0:(MLMultiArray *)output__0;
@end

/// Class for model loading and prediction
API_AVAILABLE(macos(10.13.2), ios(11.2), watchos(4.2), tvos(11.2)) __attribute__((visibility("hidden")))
@interface yoloModel : NSObject
@property (readonly, nonatomic, nullable) MLModel * model;
- (nullable instancetype)init;
- (nullable instancetype)initWithContentsOfURL:(NSURL *)url error:(NSError * _Nullable * _Nullable)error;
- (nullable instancetype)initWithConfiguration:(MLModelConfiguration *)configuration error:(NSError * _Nullable * _Nullable)error API_AVAILABLE(macos(10.14), ios(12.0), watchos(5.0), tvos(12.0)) __attribute__((visibility("hidden")));
- (nullable instancetype)initWithContentsOfURL:(NSURL *)url configuration:(MLModelConfiguration *)configuration error:(NSError * _Nullable * _Nullable)error API_AVAILABLE(macos(10.14), ios(12.0), watchos(5.0), tvos(12.0)) __attribute__((visibility("hidden")));

/**
    Make a prediction using the standard interface
    @param input an instance of yoloModelInput to predict from
    @param error If an error occurs, upon return contains an NSError object that describes the problem. If you are not interested in possible errors, pass in NULL.
    @return the prediction as yoloModelOutput
*/
- (nullable yoloModelOutput *)predictionFromFeatures:(yoloModelInput *)input error:(NSError * _Nullable * _Nullable)error;

/**
    Make a prediction using the standard interface
    @param input an instance of yoloModelInput to predict from
    @param options prediction options
    @param error If an error occurs, upon return contains an NSError object that describes the problem. If you are not interested in possible errors, pass in NULL.
    @return the prediction as yoloModelOutput
*/
- (nullable yoloModelOutput *)predictionFromFeatures:(yoloModelInput *)input options:(MLPredictionOptions *)options error:(NSError * _Nullable * _Nullable)error;

/**
    Make a prediction using the convenience interface
    @param input__0 as color (kCVPixelFormatType_32BGRA) image buffer, 608 pixels wide by 608 pixels high:
    @param error If an error occurs, upon return contains an NSError object that describes the problem. If you are not interested in possible errors, pass in NULL.
    @return the prediction as yoloModelOutput
*/
- (nullable yoloModelOutput *)predictionFromInput__0:(CVPixelBufferRef)input__0 error:(NSError * _Nullable * _Nullable)error;

/**
    Batch prediction
    @param inputArray array of yoloModelInput instances to obtain predictions from
    @param options prediction options
    @param error If an error occurs, upon return contains an NSError object that describes the problem. If you are not interested in possible errors, pass in NULL.
    @return the predictions as NSArray<yoloModelOutput *>
*/
- (nullable NSArray<yoloModelOutput *> *)predictionsFromInputs:(NSArray<yoloModelInput*> *)inputArray options:(MLPredictionOptions *)options error:(NSError * _Nullable * _Nullable)error API_AVAILABLE(macos(10.14), ios(12.0), watchos(5.0), tvos(12.0)) __attribute__((visibility("hidden")));
@end

NS_ASSUME_NONNULL_END

  Since the model is named yoloModel, the auto-generated class header is "yoloModel.h" and the generated class is also called yoloModel.

  The model's input and output are exposed as input__0 and output__0, so the auto-generated API methods carry those names in their signatures. For example:

- (nullable yoloModelOutput *)predictionFromInput__0:(CVPixelBufferRef)input__0 error:(NSError * _Nullable * _Nullable)error;

  

3. Write the demo call inside viewDidLoad. Of course, there is still quite a bit of work between this demo call and the auto-generated yoloModel class: image preprocessing, for example, and parsing the prediction output into bounding-box information. So I wrapped an intermediate layer in between; more on that below:

  

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.

    //load image
    NSString* imagePath_=[[NSBundle mainBundle] pathForResource:@"dog416" ofType:@"jpg"];
    std::string imgPath = std::string([imagePath_ UTF8String]);
    cv::Mat image = cv::imread(imgPath);
    cv::cvtColor(image, image, cv::COLOR_BGR2RGBA);

    //set classtxt path
    NSString* classtxtPath_ = [ [NSBundle mainBundle] pathForResource:@"classtxt" ofType:@"txt"];
    std::string classtxtPath = std::string([classtxtPath_ UTF8String]);

    //init Detection
    bool useCpuOnly = false;
    MLComputeUnits computeUnit = MLComputeUnitsAll;
    cv::Size scaleSize(608, 608);
    CDetectObject objectDetection;
    objectDetection.init(useCpuOnly, computeUnit, classtxtPath, scaleSize);

    //run detection
    std::vector<DetectionInfo> detectionResults;
    objectDetection.implDetection(image, detectionResults);

    //draw rectangles
    cv::Mat showImage;
    cv::resize(image, showImage, scaleSize);
    for (int i=0; i<detectionResults.size();i++)
    {
        cv::rectangle(showImage,detectionResults[i].box, cv::Scalar(255, 0,0), 3);
    }

    //show in iphone
    cv::cvtColor(showImage, showImage, cv::COLOR_RGBA2BGRA);
    [self showUIImage:showImage];
}
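
  The showUIImage: call at the end is a small helper of my own, not something Core ML generates. A minimal sketch using the MatToUIImage converter from <opencv2/imgcodecs/ios.h> (or <opencv2/highgui/ios.h> in older OpenCV builds) could look like the following; the imageView outlet is an assumption here, and the channel order the Mat needs to be in depends on the converter, so the cvtColor call before it may need adjusting:

#import <opencv2/imgcodecs/ios.h>   // provides MatToUIImage / UIImageToMat

// Convert the cv::Mat to a UIImage and show it on screen.
// Assumes the view controller has a UIImageView outlet named imageView.
- (void)showUIImage:(const cv::Mat&)mat {
    UIImage* image = MatToUIImage(mat);   // copies the Mat pixels into a UIImage
    self.imageView.image = image;
}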

  The CDetectObject used above is a class I wrapped myself; it exposes two interfaces, init and implDetection.

  init takes the compute-device settings, the path of the class-label file, and the input image size the model expects.

  implDetection takes an input image (in RGBA format) and outputs a vector of detection-result structs, each containing the class name, the confidence, and the bounding box of one detected object.

struct DetectionInfo {
    std::string name;
    float confidence;
    cv::Rect2d box;
};
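
  For reference, here is a rough sketch of what the CDetectObject declaration might look like. It is reconstructed from how the class is used below, so member names and exact signatures may differ from the actual source on Gitee:

// CDetectObject.h -- sketch of the wrapper class, inferred from its usage in this post
#import <CoreML/CoreML.h>
#import "yoloModel.h"
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>
// uses the DetectionInfo struct defined above

class CDetectObject {
public:
    // set the compute device, load the model, the labels and the basic parameters
    int init(const BOOL useCpuOnly, const MLComputeUnits computeUnit,
             const std::string& classtxtPath, const cv::Size& scaleSize);

    // run detection on an RGBA image and fill detectionResults
    int implDetection(const cv::Mat& image, std::vector<DetectionInfo>& detectionResults);

private:
    void preprocessImage(const cv::Mat& image, cv::Mat& inputImage);
    MLMultiArray* predictImageScene(const cv::Mat& imgTensor);
    void parseFeature(MLMultiArray* feature, std::vector<int>& ids,
                      std::vector<float>& confidences, std::vector<cv::Rect>& boxes);
    int loadClasstxt(const std::string& classtxtPath, std::vector<std::string>& classes);

    yoloModel* Model;                  // auto-generated Core ML class
    MLPredictionOptions* option;
    MLModelConfiguration* config;
    cv::Size inputSize;                // 608 x 608 for this model
    int maxBoundingBoxes;
    float confidenceThreshold;
    float nmsThreshold;
    std::vector<float> anchors;        // YOLOv2 anchor boxes, 2 values per box
    std::vector<std::string> classes;  // class labels loaded from classtxt
};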

4. Now let's see what the init step actually has to do.

  It covers setting up the compute device, initializing the model, initializing a few basic parameters, and loading the class-label file.

//init model
int CDetectObject::init(const BOOL useCpuOnly, const MLComputeUnits computeUnit, const std::string& classtxtPath, const cv::Size& scaleSize){

    //init configuration
    option = [[MLPredictionOptions alloc] init];
    option.usesCPUOnly = useCpuOnly;

    config = [ [MLModelConfiguration alloc] init];
    config.computeUnits = computeUnit;

    NSError* err;
    Model = [[yoloModel alloc] initWithConfiguration:config error:&err];

    //init params
    inputSize = scaleSize;
    maxBoundingBoxes = 10;
    confidenceThreshold = 0.5;
    nmsThreshold = 0.6;
    // anchor boxes
    anchors = {0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828};

    //load labels
    int ret = loadClasstxt(classtxtPath, classes);

    return ret;
}
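
  The loadClasstxt call reads the class labels (one name per line, in the order the network was trained with) from classtxtPath into the classes vector. The post does not list it; a minimal sketch could look like this:

#include <fstream>

// Read one class name per line from classtxtPath into `classes`.
// Returns 0 on success, -1 if the file cannot be opened or yields no labels.
int CDetectObject::loadClasstxt(const std::string& classtxtPath, std::vector<std::string>& classes)
{
    std::ifstream file(classtxtPath);
    if (!file.is_open()){
        NSLog(@"Error! cannot open class file: %s", classtxtPath.c_str());
        return -1;
    }
    std::string line;
    while (std::getline(file, line)){
        if (!line.empty() && line.back() == '\r') line.pop_back();  // tolerate CRLF files
        if (!line.empty()) classes.push_back(line);
    }
    return classes.empty() ? -1 : 0;
}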

5. Next, let's see what running a prediction involves:

  First, preprocess the image, which includes resizing it to the size the model requires.

  Second, feed the preprocessed image to the prediction call and get the output. This is where the prediction interface of the auto-generated Core ML class is invoked.

  Then, parse the prediction output: based on the structure of the YOLOv2 output feature map, decode it into the DetectionInfo fields described above.

  Finally, parsing produces a large number of candidate boxes, so non-maximum suppression (NMSBoxes) is applied to drop the heavily overlapping ones and obtain the final result.

int CDetectObject::implDetection(const cv::Mat& image, std::vector<DetectionInfo>& detectionResults){

    if(image.empty()){
        NSLog(@"Error! image is empty!");
        return -1;
    }

    //preprocessing
    cv::Mat inputImage;
    preprocessImage(image,  inputImage);

    //prediction
    MLMultiArray* outFeature = predictImageScene(inputImage);

    //analyze the output
    std::vector<int> idxList;
    std::vector<float> confidenceList;
    std::vector<cv::Rect> boxesList;
    parseFeature(outFeature, idxList, confidenceList, boxesList);

    //nms box
    std::vector<int> indices;
    cv::dnn::NMSBoxes(boxesList, confidenceList, confidenceThreshold, nmsThreshold, indices);

    //get result
    for (int i=0; i<indices.size(); i++){
        int idx = indices[i];
        DetectionInfo objectInfo;
        objectInfo.name = classes[idxList[idx]];
        objectInfo.confidence = confidenceList[idx];
        objectInfo.box = boxesList[idx];
        detectionResults.push_back(objectInfo);
    }

    return 0;
}
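
  The preprocessImage step above also belongs to my wrapper. For this model it essentially just has to resize the RGBA image to the 608x608 input size; a minimal sketch under that assumption:

// Resize the RGBA input to the size the model expects (608x608 here).
// The generated yoloModelInput takes a 32BGRA pixel buffer, so any channel
// reordering is left to the Mat-to-CVPixelBuffer conversion that follows.
void CDetectObject::preprocessImage(const cv::Mat& image, cv::Mat& inputImage)
{
    cv::resize(image, inputImage, inputSize);
}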

  The prediction function:

MLMultiArray* CDetectObject::predictImageScene(const cv::Mat& imgTensor) {
    //convert cv::Mat to CVPixelBuffer
    ins::PixelBufferPool mat2pixelbuffer;
    CVPixelBufferRef buffer = mat2pixelbuffer.GetPixelBuffer(imgTensor);

    //predict from image
    NSError *error;
    yoloModelInput  *input = [[yoloModelInput alloc] initWithInput__0:buffer];

    yoloModelOutput *output = [Model predictionFromFeatures:input options:option error:&error];

    return output.output__0;
}
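
  ins::PixelBufferPool is my own helper that converts a cv::Mat into a CVPixelBufferRef; it is not listed in this post. The core of such a conversion is to create a kCVPixelFormatType_32BGRA buffer and copy the Mat rows into it. A simplified sketch without the pooling/reuse logic, assuming a 4-channel 8-bit Mat that is already in the byte order the model expects, could look like this:

#import <CoreVideo/CoreVideo.h>
#include <cstring>

// Create a 32BGRA CVPixelBuffer and copy a 4-channel cv::Mat into it row by row
// (the buffer's bytes-per-row may be padded and differ from mat.step).
// The bytes are copied as-is, with no channel reordering.
// The caller owns the returned buffer and must CVPixelBufferRelease it.
static CVPixelBufferRef PixelBufferFromMat(const cv::Mat& mat)
{
    CVPixelBufferRef buffer = NULL;
    NSDictionary* attrs = @{ (id)kCVPixelBufferCGImageCompatibilityKey : @YES,
                             (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          mat.cols, mat.rows,
                                          kCVPixelFormatType_32BGRA,
                                          (__bridge CFDictionaryRef)attrs,
                                          &buffer);
    if (status != kCVReturnSuccess || buffer == NULL) return NULL;

    CVPixelBufferLockBaseAddress(buffer, 0);
    uint8_t* dst = (uint8_t*)CVPixelBufferGetBaseAddress(buffer);
    size_t dstStride = CVPixelBufferGetBytesPerRow(buffer);
    for (int row = 0; row < mat.rows; row++){
        memcpy(dst + row * dstStride, mat.ptr(row), mat.cols * 4);
    }
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    return buffer;
}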

  The feature-parsing function:

void CDetectObject::parseFeature(MLMultiArray* feature, std::vector<int>& ids, std::vector<float>& confidences, std::vector<cv::Rect>& boxes){

    NSArray<NSNumber*>* featureShape = feature.shape;
    int d0 = [[featureShape objectAtIndex:0] intValue];
    int d1 = [[featureShape objectAtIndex:1] intValue];
    int d2 = [[featureShape objectAtIndex:2] intValue];

    int stride0 = [feature.strides[0] intValue];
    int stride1 = [feature.strides[1] intValue];
    int stride2 = [feature.strides[2] intValue];

    int blockSize = 32;
    int gridHeight = d1;
    int gridWidth = d2;
    int boxesPerCell = 5;           // = anchors.size() / 2 (two values per anchor box)
    int numClasses = (int)classes.size();

    double* pdata = (double*)feature.dataPointer;

    for (int cy =0; cy< gridHeight; cy++){
        for (int cx =0; cx< gridWidth; cx++){
            for (int b=0; b<boxesPerCell; b++){
                int channel = b*(numClasses + 5);

                int laterId= cx*stride2+cy*stride1;
                float tx = (float)pdata[channel*stride0 + laterId];
                float ty = (float)pdata[(channel+1)*stride0 + laterId];
                float tw = (float)pdata[(channel+2)*stride0 + laterId];
                float th = (float)pdata[(channel+3)*stride0 + laterId];
                float tc = (float)pdata[(channel+4)*stride0 + laterId];

                // The predicted tx and ty coordinates are relative to the location
                // of the grid cell; we use the logistic sigmoid to constrain these
                // coordinates to the range 0 - 1. Then we add the cell coordinates
                // (0-12) and multiply by the number of pixels per grid cell (32).
                // Now x and y represent center of the bounding box in the original
                // 608x608 image space.
                float x = (float(cx) + sigmoid(tx)) * blockSize;
                float y = (float(cy) + sigmoid(ty)) * blockSize;

                // The size of the bounding box, tw and th, is predicted relative to
                // the size of an "anchor" box. Here we also transform the width and
                // height into the original 608x608 image space.
                float w = exp(tw) * anchors[2*b] * blockSize;
                float h = exp(th) * anchors[2*b + 1] * blockSize;

                // The confidence value for the bounding box is given by tc. We use
                // the logistic sigmoid to turn this into a percentage.
                float confidence = sigmoid(tc);
                std::vector<float> classesProb(numClasses);
                for (int i = 0; i < numClasses; ++i) {
                    int offset = (channel+5+i)*stride0 + laterId;
                    classesProb[i] =  (float)pdata[offset];
                }
                softmax(classesProb);

                // Find the index of the class with the largest score.
                auto max_itr = std::max_element(classesProb.begin(), classesProb.end());
                int index = int(max_itr - classesProb.begin());

                // Combine the confidence score for the bounding box, which tells us
                // how likely it is that there is an object in this box (but not what
                // kind of object it is), with the largest class prediction, which
                // tells us what kind of object it detected (but not where).
                float confidenceInClass = classesProb[index] * confidence;
                // Since we compute 19x19x5 = 1805 bounding boxes, we only want to
                // keep the ones whose score is over a certain threshold. Here the
                // box confidence alone is used as the filter; the commented-out
                // line shows the alternative of filtering on the combined score.
                if (confidence > confidenceThreshold){
                //if (confidenceInClass > confidenceThreshold){
                    cv::Rect2d rect = cv::Rect2d(float(x - w*0.5), float(y - h*0.5), float(w), float(h));
                    ids.push_back(index);
                    confidences.push_back(confidenceInClass);
                    boxes.push_back(rect);
                }
            }
        }
    }
}
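
  sigmoid and softmax above are small helpers that the post does not list. Standard implementations look like this (whether they are free functions or members of CDetectObject does not matter for the math; the softmax subtracts the maximum value first for numerical stability):

#include <algorithm>
#include <cmath>
#include <vector>

// logistic sigmoid, used to squash tx/ty and the box confidence tc into (0, 1)
static inline float sigmoid(float x)
{
    return 1.0f / (1.0f + std::exp(-x));
}

// in-place softmax over the class scores of one box
static void softmax(std::vector<float>& v)
{
    float maxVal = *std::max_element(v.begin(), v.end());
    float sum = 0.0f;
    for (float& x : v){ x = std::exp(x - maxVal); sum += x; }
    for (float& x : v){ x /= sum; }
}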

6. Finally, let's see how the predictions look:

  Development environment: macOS Mojave (10.14.3), Xcode 10.2, iPhone XS (iOS 12.2), opencv2.framework.

  

    

  

The code above is available on Gitee: https://gitee.com/rxdj/yolov2_object_detection.git .

It is offered for reference only; if you spot any mistakes, please do point them out.

Original post: https://www.cnblogs.com/riddick/p/10703787.html
