SqueezeNet: AlexNet-level Accuracy with 50x Fewer Parameters and <0.5MB Model Size

- Fire modules consist of a "squeeze" layer with 1×1 filters feeding an "expand" layer with a mix of 1×1 and 3×3 filters

- AlexNet-level accuracy on ImageNet with 50x fewer parameters

- Can be compressed to 510x smaller than AlexNet (<0.5MB model size)
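To make the parameter savings concrete, here is a minimal sketch in plain Python that counts the weights of one Fire module against a plain 3×3 convolution with the same input/output channels. The layer sizes (96 input channels, squeeze to 16, expand to 64 + 64) follow the paper's fire2 module; the function names are my own for illustration.

```python
def conv_params(in_ch, out_ch, k, bias=True):
    """Parameter count of a k x k convolution layer (weights plus optional biases)."""
    return in_ch * out_ch * k * k + (out_ch if bias else 0)

def fire_params(in_ch, s1x1, e1x1, e3x3):
    """Parameter count of a Fire module: a 1x1 squeeze layer feeding
    parallel 1x1 and 3x3 expand layers whose outputs are concatenated."""
    squeeze = conv_params(in_ch, s1x1, 1)
    expand = conv_params(s1x1, e1x1, 1) + conv_params(s1x1, e3x3, 3)
    return squeeze + expand

# fire2-style sizes: 96 channels in, squeeze to 16, expand to 64 + 64 (128 out)
fire = fire_params(96, 16, 64, 64)
plain = conv_params(96, 128, 3)  # plain 3x3 conv with the same in/out channels
print(fire, plain)  # 11920 110720 -- roughly 9x fewer parameters
```

The squeeze layer is what makes this cheap: the expensive 3×3 filters only ever see the 16 squeezed channels, never the full 96.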

Original source: https://www.cnblogs.com/2008nmj/p/9136509.html

Posted: 2024-10-18 10:53:25

Related articles

SQUEEZENET: ALEXNET-LEVEL ACCURACY WITH 50X FEWER PARAMETERS AND <0.5MB MODEL SIZE

Paper reading notes. Please credit the source when reposting: http://www.cnblogs.com/sysuzyq/p/6186518.html By 少侠阿朱

Parameter Passing / Request Parameters in JSF 2.0 (repost)

This Blog is a compilation of various methods of passing Request Parameters in JSF (2.0 +) (1)  f:viewParam One of the features added in JSF 2.0 is "View Parameters"; Simply speaking it allows adding "Query string" or "Request Par

A Summary of Lightweight Convolutional Neural Network Models, by wilson (ShuffleNet, MobileNet, SqueezeNet + Xception)

1. Overview. Main reference: 纵览轻量化卷积神经网络 https://zhuanlan.zhihu.com/p/32746221 1) SqueezeNet: compared with AlexNet, SqueezeNet cuts network parameters by 50x while delivering comparable performance. SqueezeNet mainly relies on 1x1 convolution kernels to compress the number of feature maps, which greatly reduces the parameter count; the network itself is built following VGG's stacking idea. 2) MobileNet: MobileNet adopts the depth-wise convolution style of convolution
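The depth-wise trick mentioned in that summary can also be quantified with a parameter count. This sketch (the channel sizes here are my own example, not taken from the article) compares a standard 3×3 convolution with a depthwise-separable one, i.e. a per-channel 3×3 depthwise convolution followed by a 1×1 pointwise convolution:

```python
def standard_conv_params(in_ch, out_ch, k):
    # every output channel filters all input channels with its own k x k kernel
    return in_ch * out_ch * k * k

def depthwise_separable_params(in_ch, out_ch, k):
    # depthwise: one k x k kernel per input channel (spatial filtering only)
    # pointwise: a 1x1 conv that mixes channels to produce out_ch outputs
    depthwise = in_ch * k * k
    pointwise = in_ch * out_ch
    return depthwise + pointwise

print(standard_conv_params(64, 128, 3))        # 73728
print(depthwise_separable_params(64, 128, 3))  # 8768, roughly 8x fewer
```

The saving comes from factoring the convolution: spatial filtering and channel mixing are done in two cheap steps instead of one expensive joint step.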

Analysis and Summary of Lightweight CNN Models

This article selects four lightweight CNN models and analyzes and summarizes their design ideas and performance, aiming to provide reference material for model selection and design in image-recognition tasks. 1 Introduction. Since AlexNet [1] achieved breakthrough progress on the LSVRC-2010 ImageNet [22] image-classification task, building deeper and larger convolutional neural networks (CNNs) has become a dominant trend [2-9]. Typically, models achieving state-of-the-art accuracy have hundreds of network layers and thousands of intermediate feature channels,

(Repost) The Incredible PyTorch

Reposted from: https://github.com/ritchieng/the-incredible-pytorch The Incredible PyTorch What is this? This is inspired by the famous Awesome TensorFlow repository where this repository would hold tutorials, projects, libraries, videos, papers, books and anythi

Literature | The Most-Cited Deep Learning Papers, 2010-2016 (Revised Edition)

Originally from: http://blog.csdn.net/u010402786/article/details/51682917 1. Books: Deep Learning (2015), author: Bengio, download: http://www.deeplearningbook.org/ 2. Theory: 1. Distilling the knowledge in a neural network, by G. Hinton et al. 2. Deep neural networks are easily fooled: high confidence predictions for unrecognizable images

models-caffes: A Complete Collection

Caffe's Berkeley homepage: http://caffe.berkeleyvision.org/ Caffe's GitHub homepage: https://github.com/BVLC/caffe Caffe models: http://dl.caffe.berkeleyvision.org/ Index of / ../ mit_mini_places/ 01-Mar-2016 12:18 - bvlc_alexnet.caffemodel 22-Aug-2014 04:36 243862414 bvlc_go

CS231n笔记 Lecture 9, CNN Architectures

Review: LeNet-5, 1998 by LeCun, one conv layer. Case Study: AlexNet [Krizhevsky et al. 2012]. It uses a lot of modern techniques, though it is still limited by historical issues (separated feature maps, norm layers). Kind of obsolete, but it is the first C

From Text to Vision: Collections of the Most Cutting-Edge Papers in Every Field

From GitHub. Author: Simon Brugman. Contributor: 吴攀. Deep learning has flourished in many fields, including speech recognition, machine translation, image object detection, and chatbots. Recently, GitHub user Simon Brugman published a deep-learning paper project organized by task, listing some of the current best papers and useful starting papers for each task type. Contents: 1. Text 1.1. Code Generation 1.2. Sentiment Analysis 1.3. Translation 1.4. Classification