Paper | Densely Connected Convolutional Networks


Paper: Densely Connected Convolutional Networks, CVPR 2017

Abstract

Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has \(\frac{L(L+1)}{2}\) direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.
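The dense connectivity described in the abstract is easy to express in code. Below is a minimal PyTorch sketch of our own, not the authors' reference implementation: each layer's input is the channel-wise concatenation of the block input and every preceding layer's output, and each layer contributes a fixed number of new feature maps (the growth rate). The class names `DenseLayer` and `DenseBlock` are illustrative; the paper's composite function is BN-ReLU-Conv, optionally preceded by a 1×1 bottleneck, which this sketch omits.

```python
import torch
import torch.nn as nn


class DenseLayer(nn.Module):
    """BN-ReLU-Conv composite function; emits growth_rate new feature maps."""

    def __init__(self, in_channels: int, growth_rate: int):
        super().__init__()
        self.norm = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(torch.relu(self.norm(x)))


class DenseBlock(nn.Module):
    """Each layer consumes the concatenation of ALL preceding feature maps."""

    def __init__(self, num_layers: int, in_channels: int, growth_rate: int):
        super().__init__()
        # Layer i sees the block input plus i earlier outputs of growth_rate maps each.
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]  # running list of every feature map produced so far
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        # Block output: the input concatenated with all newly produced maps.
        return torch.cat(features, dim=1)
```

This per-layer fan-in, where every layer reads from every earlier one, is what yields the \(\frac{L(L+1)}{2}\) direct connections counted in the abstract, and it is why the growth rate can stay small: each layer only needs to add a few feature maps, since everything earlier remains accessible.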

Conclusion

We proposed a new convolutional network architecture, which we refer to as Dense Convolutional Network (DenseNet). It introduces direct connections between any two layers with the same feature-map size. We showed that DenseNets scale naturally to hundreds of layers, while exhibiting no optimization difficulties. In our experiments, DenseNets tend to yield consistent improvement in accuracy with growing number of parameters, without any signs of performance degradation or overfitting. Under multiple settings, it achieved state-of-the-art results across several highly competitive datasets. Moreover, DenseNets require substantially fewer parameters and less computation to achieve state-of-the-art performances. Because we adopted hyperparameter settings optimized for residual networks in our study, we believe that further gains in accuracy of DenseNets may be obtained by more detailed tuning of hyperparameters and learning rate schedules. Whilst following a simple connectivity rule, DenseNets naturally integrate the properties of identity mappings, deep supervision, and diversified depth. They allow feature reuse throughout the networks and can consequently learn more compact and, according to our experiments, more accurate models. Because of their compact internal representations and reduced feature redundancy, DenseNets may be good feature extractors for various computer vision tasks that build on convolutional features, e.g., [4, 5]. We plan to study such feature transfer with DenseNets in future work.
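Note the qualifier "with the same feature-map size": concatenation requires matching spatial dimensions, so dense connections apply within a block, and the full network stacks several dense blocks joined by downsampling transition layers. A hedged sketch of such a transition, following the BN + 1×1 convolution + 2×2 average pooling recipe the paper describes (the class name is ours):

```python
class Transition(nn.Module):
    """Downsampling between dense blocks: 1x1 conv to adjust channel count,
    then 2x2 average pooling to halve the spatial resolution."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.norm = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, out_channels,
                              kernel_size=1, bias=False)
        self.pool = nn.AvgPool2d(kernel_size=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(self.conv(torch.relu(self.norm(x))))
```

Setting `out_channels` below the incoming channel count gives the compression variant (DenseNet-C) that further reduces parameters.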

Original post: https://www.cnblogs.com/RyanXing/p/11606897.html

