Object Detection Competition: Google AI Open Images - Object Detection Track

https://www.kaggle.com/c/google-ai-open-images-object-detection-track#Evaluation

Submissions are evaluated by computing mean Average Precision (mAP), modified to take into account the annotation process of the Open Images dataset (the mean is taken over per-class APs). The metric is described on the Open Images Challenge website.

The final mAP is computed as the average AP over the 500 classes. Participants are ranked on this final metric.
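As a concrete illustration, here is a minimal Python sketch of all-point interpolated per-class AP and the final class-averaged mAP. It is not Kaggle's production implementation: the official metric additionally accounts for Open Images' group-of boxes and non-exhaustive image-level annotations, which this sketch omits.

```python
import numpy as np

def average_precision(scores, is_true_positive, num_groundtruth):
    """All-point interpolated AP for one class (simplified sketch;
    group-of boxes and label verification are not handled here)."""
    order = np.argsort(-np.asarray(scores))          # sort by descending confidence
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / max(num_groundtruth, 1)
    precision = tp_cum / (tp_cum + fp_cum)
    # Make precision monotonically decreasing, then integrate over recall.
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

def final_map(per_class_ap):
    """Unweighted mean of per-class APs (500 classes in this challenge)."""
    return sum(per_class_ap.values()) / len(per_class_ap)
```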

Kaggle's production code in C# can be viewed here. The metric is also implemented as part of the TensorFlow Object Detection API; see this tutorial on running the evaluation in Python.

Kernel Submissions

You can make submissions directly from Kaggle Kernels. By adding your teammates as collaborators on a kernel, you can share and edit code privately with them.

Submission File

For each image in the test set, you must predict a list of boxes describing the objects in the image. Each box is described as:

ImageID,PredictionString
ImageID,{Label Confidence XMin YMin XMax YMax},{...}
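A minimal sketch of building this file in Python follows. The predictions dictionary is a hypothetical structure for this example; coordinates are assumed to be normalized to [0, 1] and labels to be Open Images MID strings (e.g. /m/04bcr3), which you should verify against the competition's data description.

```python
import csv

# predictions: {image_id: [(label_mid, confidence, xmin, ymin, xmax, ymax), ...]}
def write_submission(predictions, path="submission.csv"):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["ImageID", "PredictionString"])
        for image_id, boxes in predictions.items():
            # All boxes for an image are concatenated into one
            # space-separated PredictionString.
            pred = " ".join(
                f"{label} {conf:.4f} {x1:.4f} {y1:.4f} {x2:.4f} {y2:.4f}"
                for label, conf, x1, y1, x2, y2 in boxes
            )
            writer.writerow([image_id, pred])
```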

TensorFlow ships an evaluation implementation as part of the Object Detection API: https://github.com/tensorflow/models/tree/master/research/object_detection
The evaluation procedure is documented here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/challenge_evaluation.md
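For reference, the sketch below shows one way to drive that evaluation from Python via the API's OpenImagesDetectionEvaluator, as suggested by the linked guide. The category list, image id, boxes, and scores are hypothetical placeholders, and the field and class names should be checked against your checkout of the Object Detection API.

```python
import numpy as np
from object_detection.core import standard_fields
from object_detection.utils import object_detection_evaluation

# Category ids are 1-based; the single class here is a placeholder.
categories = [{"id": 1, "name": "/m/04bcr3"}]
evaluator = object_detection_evaluation.OpenImagesDetectionEvaluator(categories)

# Ground truth for one image: boxes are [ymin, xmin, ymax, xmax].
evaluator.add_single_ground_truth_image_info(
    "image_1",
    {
        standard_fields.InputDataFields.groundtruth_boxes:
            np.array([[0.1, 0.1, 0.5, 0.5]], dtype=np.float32),
        standard_fields.InputDataFields.groundtruth_classes:
            np.array([1], dtype=np.int32),
    })

# Detections for the same image, with confidence scores.
evaluator.add_single_detected_image_info(
    "image_1",
    {
        standard_fields.DetectionResultFields.detection_boxes:
            np.array([[0.1, 0.1, 0.5, 0.5]], dtype=np.float32),
        standard_fields.DetectionResultFields.detection_scores:
            np.array([0.9], dtype=np.float32),
        standard_fields.DetectionResultFields.detection_classes:
            np.array([1], dtype=np.int32),
    })

print(evaluator.evaluate())  # dict of per-class AP and the overall mAP
```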

Original article: https://www.cnblogs.com/Allen-rg/p/10645224.html
