Paper | Residual Dense Network for Image Super-Resolution

Contents

  • Residual dense block & network
  • Differences from DenseNet
  • Abstract and conclusion

Published at CVPR 2018.

Both the abstract and the conclusion mainly emphasize the method's advantages, so let's start from the structure of RDN and come back to its background and ideas afterwards.

Residual dense block & network

At first glance, the block uses dense connections internally and residual learning around the outside. The RDN follows the same pattern at the global level: dense inside, residual overall. Both the RDB and the RDN use \(3 \times 3\) and \(1 \times 1\) convolutions internally.
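As a concrete reference, here is a minimal PyTorch sketch of one RDB as I read it. The names `G0` (block width), `G` (growth rate), and `C` (convs per block), and their default values, are my assumptions for illustration, not the paper's released code.

```python
import torch
import torch.nn as nn

class RDB(nn.Module):
    """Minimal residual dense block sketch (assumed hyperparameters)."""
    def __init__(self, G0=64, G=32, C=6):
        super().__init__()
        self.layers = nn.ModuleList()
        for c in range(C):
            # each 3x3 conv sees the block input plus all preceding outputs
            self.layers.append(nn.Sequential(
                nn.Conv2d(G0 + c * G, G, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
        # local feature fusion: a 1x1 conv squeezes the concatenation
        # back down to G0 channels
        self.lff = nn.Conv2d(G0 + C * G, G0, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        # local residual learning: add the block input back
        return x + self.lff(torch.cat(feats, dim=1))
```

For example, `RDB()(torch.randn(1, 64, 32, 32))` returns a tensor of the same shape: LFF squeezes the concatenation back to `G0` channels so the residual add lines up.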

Let's see how the authors justify this design, and whether the experiments verify its effectiveness.

Differences from DenseNet

  1. RDN and RDB drop the BN and pooling layers, because the authors argue that they not only consume resources but also hinder learning. (Note: in some denoising work, authors have likewise found that BN does not help with removing noise other than Gaussian noise.)
  2. In DenseNet, transition layers are needed between blocks; here a \(1 \times 1\) convolution takes their place, the so-called local feature fusion. (Note: essentially the same thing, except that the transition layer adds BN and pooling because it serves a high-level vision task, namely image classification; see the sketch after this list.)
  3. Residual learning is applied both globally and locally, which DenseNet lacks. The local residual connection lets the output of the previous RDB feed directly into the output of the current RDB. This is what the authors call contiguous memory (CM).
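To make point 2 concrete, here is a small side-by-side sketch, my own illustration following torchvision-style conventions rather than code from either paper: a DenseNet transition layer versus RDN's local feature fusion.

```python
import torch.nn as nn

def densenet_transition(in_ch, out_ch):
    # DenseNet transition layer: BN + 1x1 conv + average pooling.
    # The pooling halves spatial resolution, which is fine for
    # classification but would destroy detail needed for SR.
    return nn.Sequential(
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, kernel_size=1),
        nn.AvgPool2d(kernel_size=2, stride=2),
    )

def rdn_local_feature_fusion(in_ch, out_ch):
    # RDN's LFF: a bare 1x1 conv, with no BN (argued to hurt SR)
    # and no pooling (SR must preserve spatial detail).
    return nn.Conv2d(in_ch, out_ch, kernel_size=1)
```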

Honestly, after reading these explanations I no longer feel like going through the experiments, because the design reads more like a bag of tricks (there are few genuinely exciting ideas, and the justification feels a bit forced). Let's turn back to the abstract and conclusion.

Abstract and conclusion

Abstract

A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively-low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose residual dense block (RDB) to extract abundant local features via dense connected convolutional layers. RDB further allows direct connections from the state of preceding RDB to all the layers of current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and stabilizes the training of wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods.

Conclusion

In this paper, we proposed a very deep residual dense network (RDN) for image SR, where residual dense block (RDB) serves as the basic build module. In each RDB, the dense connections between each layer allow full usage of local layers. The local feature fusion (LFF) not only stabilizes the training of wider network, but also adaptively controls the preservation of information from current and preceding RDBs. RDB further allows direct connections between the preceding RDB and each layer of current block, leading to a contiguous memory (CM) mechanism. The local residual learning (LRL) further improves the flow of information and gradient. Moreover, we propose global feature fusion (GFF) to extract hierarchical features in the LR space. By fully using local and global features, our RDN leads to a dense feature fusion and deep supervision. We use the same RDN structure to handle three degradation models and real-world data. Extensive benchmark evaluations well demonstrate that our RDN achieves superiority over state-of-the-art methods.

Let me put that into plain words:

  1. Inside each RDB there is a block-spanning shortcut, so the output of the previous RDB is fed directly to the output end of the current RDB; this is what the authors call the contiguous memory (CM) mechanism.
  2. A \(1 \times 1\) convolution sits between consecutive RDBs, which the authors call local feature fusion. Isn't this just the channel-reduction trick everyone uses? The name makes it sound grander than it is. The authors also stress that LFF stabilizes the training of wider networks. Note that DenseNet deliberately keeps its layers narrow to cut computation; this is a trade-off between adding redundancy (for better generalization) and reducing cost. See my earlier post for details. (A sketch of the overall layout follows this list.)
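For completeness, here is a sketch of the global layout as I read it: shallow feature extraction, a chain of RDBs, global feature fusion over the concatenated RDB outputs, and a global residual connection. It reuses the `RDB` class from the first sketch; the default values of `D` (number of blocks) are assumptions, and the upsampling tail (e.g. a sub-pixel conv) is omitted for brevity, so this may differ from the paper's released code.

```python
import torch
import torch.nn as nn

class RDN(nn.Module):
    """Global RDN layout sketch; assumes the RDB class defined above."""
    def __init__(self, in_ch=3, G0=64, G=32, C=6, D=4):
        super().__init__()
        # shallow feature extraction: two 3x3 convs
        self.sfe1 = nn.Conv2d(in_ch, G0, kernel_size=3, padding=1)
        self.sfe2 = nn.Conv2d(G0, G0, kernel_size=3, padding=1)
        self.rdbs = nn.ModuleList([RDB(G0, G, C) for _ in range(D)])
        # global feature fusion: 1x1 conv over the D concatenated
        # RDB outputs, followed by a 3x3 conv
        self.gff = nn.Sequential(
            nn.Conv2d(D * G0, G0, kernel_size=1),
            nn.Conv2d(G0, G0, kernel_size=3, padding=1),
        )

    def forward(self, x):
        f0 = self.sfe1(x)
        f = self.sfe2(f0)
        rdb_outs = []
        for rdb in self.rdbs:
            f = rdb(f)
            rdb_outs.append(f)
        # global residual learning: add the first shallow feature back;
        # the upsampling tail would follow here
        return self.gff(torch.cat(rdb_outs, dim=1)) + f0
```

Running `RDN()(torch.randn(1, 3, 32, 32))` yields a `(1, 64, 32, 32)` feature map, i.e. the fused LR-space features before upsampling.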

Original post: https://www.cnblogs.com/RyanXing/p/11617352.html
