[Cat-Dog Dataset] Using a pretrained resnet18 model

Dataset download:

Link: https://pan.baidu.com/s/1l1AnBgkAAEhh0vI5_loWKw
Extraction code: 2xq4

Creating the dataset: https://www.cnblogs.com/xiximayou/p/12398285.html

Reading the dataset: https://www.cnblogs.com/xiximayou/p/12422827.html

Training: https://www.cnblogs.com/xiximayou/p/12448300.html

Saving the model and resuming training: https://www.cnblogs.com/xiximayou/p/12452624.html

Loading a saved model and testing: https://www.cnblogs.com/xiximayou/p/12459499.html

Splitting off a validation set and validating while training: https://www.cnblogs.com/xiximayou/p/12464738.html

Using a learning-rate decay schedule and testing while training: https://www.cnblogs.com/xiximayou/p/12468010.html

Visualizing training and testing with tensorboard: https://www.cnblogs.com/xiximayou/p/12482573.html

Accepting command-line arguments: https://www.cnblogs.com/xiximayou/p/12488662.html

Measuring the model with top-1 and top-5 accuracy: https://www.cnblogs.com/xiximayou/p/12489069.html

The relationship between epoch, batch size, and step: https://www.cnblogs.com/xiximayou/p/12405485.html

So far we have always trained the model from scratch; in this section we will train with a pretrained model instead.

All we need to do is add the following to train.py:

  if baseline:
    # baseline: same architecture, but randomly initialized weights
    model = torchvision.models.resnet18(pretrained=False)
    model.fc = nn.Linear(model.fc.in_features, 2, bias=False)
  else:
    print("Using pretrained resnet18 model")
    model = torchvision.models.resnet18(pretrained=True)
    # print the names of all weights in the state_dict
    for i in model.state_dict():
      print(i)
    # replace the 1000-class ImageNet head with a 2-class head (cat/dog)
    model.fc = nn.Linear(model.fc.in_features, 2, bias=False)
    print(model)
Using pretrained resnet18 model
conv1.weight
bn1.weight
bn1.bias
bn1.running_mean
bn1.running_var
bn1.num_batches_tracked
layer1.0.conv1.weight
layer1.0.bn1.weight
layer1.0.bn1.bias
layer1.0.bn1.running_mean
layer1.0.bn1.running_var
layer1.0.bn1.num_batches_tracked
layer1.0.conv2.weight
layer1.0.bn2.weight
layer1.0.bn2.bias
layer1.0.bn2.running_mean
layer1.0.bn2.running_var
layer1.0.bn2.num_batches_tracked
layer1.1.conv1.weight
layer1.1.bn1.weight
layer1.1.bn1.bias
layer1.1.bn1.running_mean
layer1.1.bn1.running_var
layer1.1.bn1.num_batches_tracked
layer1.1.conv2.weight
layer1.1.bn2.weight
layer1.1.bn2.bias
layer1.1.bn2.running_mean
layer1.1.bn2.running_var
layer1.1.bn2.num_batches_tracked
layer2.0.conv1.weight
layer2.0.bn1.weight
layer2.0.bn1.bias
layer2.0.bn1.running_mean
layer2.0.bn1.running_var
layer2.0.bn1.num_batches_tracked
layer2.0.conv2.weight
layer2.0.bn2.weight
layer2.0.bn2.bias
layer2.0.bn2.running_mean
layer2.0.bn2.running_var
layer2.0.bn2.num_batches_tracked
layer2.0.downsample.0.weight
layer2.0.downsample.1.weight
layer2.0.downsample.1.bias
layer2.0.downsample.1.running_mean
layer2.0.downsample.1.running_var
layer2.0.downsample.1.num_batches_tracked
layer2.1.conv1.weight
layer2.1.bn1.weight
layer2.1.bn1.bias
layer2.1.bn1.running_mean
layer2.1.bn1.running_var
layer2.1.bn1.num_batches_tracked
layer2.1.conv2.weight
layer2.1.bn2.weight
layer2.1.bn2.bias
layer2.1.bn2.running_mean
layer2.1.bn2.running_var
layer2.1.bn2.num_batches_tracked
layer3.0.conv1.weight
layer3.0.bn1.weight
layer3.0.bn1.bias
layer3.0.bn1.running_mean
layer3.0.bn1.running_var
layer3.0.bn1.num_batches_tracked
layer3.0.conv2.weight
layer3.0.bn2.weight
layer3.0.bn2.bias
layer3.0.bn2.running_mean
layer3.0.bn2.running_var
layer3.0.bn2.num_batches_tracked
layer3.0.downsample.0.weight
layer3.0.downsample.1.weight
layer3.0.downsample.1.bias
layer3.0.downsample.1.running_mean
layer3.0.downsample.1.running_var
layer3.0.downsample.1.num_batches_tracked
layer3.1.conv1.weight
layer3.1.bn1.weight
layer3.1.bn1.bias
layer3.1.bn1.running_mean
layer3.1.bn1.running_var
layer3.1.bn1.num_batches_tracked
layer3.1.conv2.weight
layer3.1.bn2.weight
layer3.1.bn2.bias
layer3.1.bn2.running_mean
layer3.1.bn2.running_var
layer3.1.bn2.num_batches_tracked
layer4.0.conv1.weight
layer4.0.bn1.weight
layer4.0.bn1.bias
layer4.0.bn1.running_mean
layer4.0.bn1.running_var
layer4.0.bn1.num_batches_tracked
layer4.0.conv2.weight
layer4.0.bn2.weight
layer4.0.bn2.bias
layer4.0.bn2.running_mean
layer4.0.bn2.running_var
layer4.0.bn2.num_batches_tracked
layer4.0.downsample.0.weight
layer4.0.downsample.1.weight
layer4.0.downsample.1.bias
layer4.0.downsample.1.running_mean
layer4.0.downsample.1.running_var
layer4.0.downsample.1.num_batches_tracked
layer4.1.conv1.weight
layer4.1.bn1.weight
layer4.1.bn1.bias
layer4.1.bn1.running_mean
layer4.1.bn1.running_var
layer4.1.bn1.num_batches_tracked
layer4.1.conv2.weight
layer4.1.bn2.weight
layer4.1.bn2.bias
layer4.1.bn2.running_mean
layer4.1.bn2.running_var
layer4.1.bn2.num_batches_tracked
fc.weight
fc.bias
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer2): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer3): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer4): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=512, out_features=2, bias=False)
)
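
A side note: in recent torchvision releases (0.13 and later) the pretrained= argument is deprecated in favor of weights=. A minimal equivalent sketch for newer versions (the two-class head replacement stays the same):

import torchvision
import torch.nn as nn

# torchvision >= 0.13 style; on older versions use pretrained=True instead
model = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2, bias=False)  # 2 classes: cat and dog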

Next, let's look at how to freeze certain layers so that their parameters receive no gradient updates during training.

First, let's print out the model's children to see its structure:

i = 0
for child in model.children():
    i += 1
    print("Child {}".format(i))
    print(child)
Child 1
Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
Child 2
BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
Child 3
ReLU(inplace=True)
Child 4
MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
Child 5
Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (1): BasicBlock(
    (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
)
Child 6
Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (downsample): Sequential(
      (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
      (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (1): BasicBlock(
    (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
)
Child 7
Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (downsample): Sequential(
      (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
      (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (1): BasicBlock(
    (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
)
Child 8
Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (downsample): Sequential(
      (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
      (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (1): BasicBlock(
    (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
)
Child 9
AdaptiveAvgPool2d(output_size=(1, 1))
Child 10
Linear(in_features=512, out_features=2, bias=False)

We freeze the first 7 children and only update the parameters of children 8, 9, and 10 (layer4, avgpool, and fc). This can be written as:

    print("使用预训练的resnet18模型")
    model=torchvision.models.resnet18(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features,2,bias=False)
    i=0
    for child in model.children():
      i+=1
      #print("第{}个child".format(str(i)))
      #print(child)
      if i<=7:
        for param in child.parameters():
          param.requires_grad=False
    #我们打印下是否是设置成功
    for name, param in model.named_parameters():
      if param.requires_grad:
        print("需要梯度:", name)
      else:
        print("不需要梯度:", name)

Next, we also need to filter the parameters that do not require updates out of the optimizer:

  optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=0.1,
                              momentum=0.9, weight_decay=1e-4)
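
One caveat the snippets above do not handle: setting requires_grad=False stops gradient updates, but the BatchNorm layers inside the frozen children still update their running_mean/running_var whenever the model is in train() mode. If we want those statistics frozen as well, a minimal sketch (my own addition, not from the original train.py) is to switch the frozen children back to eval mode after every model.train() call:

def freeze_bn_stats(model, num_frozen=7):
    # put the first num_frozen children into eval mode so their
    # BatchNorm running statistics stop updating during training
    for i, child in enumerate(model.children(), start=1):
        if i <= num_frozen:
            child.eval()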

The result of the requires_grad check above:

Using pretrained resnet18 model
no grad: conv1.weight
no grad: bn1.weight
no grad: bn1.bias
no grad: layer1.0.conv1.weight
no grad: layer1.0.bn1.weight
no grad: layer1.0.bn1.bias
no grad: layer1.0.conv2.weight
no grad: layer1.0.bn2.weight
no grad: layer1.0.bn2.bias
no grad: layer1.1.conv1.weight
no grad: layer1.1.bn1.weight
no grad: layer1.1.bn1.bias
no grad: layer1.1.conv2.weight
no grad: layer1.1.bn2.weight
no grad: layer1.1.bn2.bias
no grad: layer2.0.conv1.weight
no grad: layer2.0.bn1.weight
no grad: layer2.0.bn1.bias
no grad: layer2.0.conv2.weight
no grad: layer2.0.bn2.weight
no grad: layer2.0.bn2.bias
no grad: layer2.0.downsample.0.weight
no grad: layer2.0.downsample.1.weight
no grad: layer2.0.downsample.1.bias
no grad: layer2.1.conv1.weight
no grad: layer2.1.bn1.weight
no grad: layer2.1.bn1.bias
no grad: layer2.1.conv2.weight
no grad: layer2.1.bn2.weight
no grad: layer2.1.bn2.bias
no grad: layer3.0.conv1.weight
no grad: layer3.0.bn1.weight
no grad: layer3.0.bn1.bias
no grad: layer3.0.conv2.weight
no grad: layer3.0.bn2.weight
no grad: layer3.0.bn2.bias
no grad: layer3.0.downsample.0.weight
no grad: layer3.0.downsample.1.weight
no grad: layer3.0.downsample.1.bias
no grad: layer3.1.conv1.weight
no grad: layer3.1.bn1.weight
no grad: layer3.1.bn1.bias
no grad: layer3.1.conv2.weight
no grad: layer3.1.bn2.weight
no grad: layer3.1.bn2.bias
requires grad: layer4.0.conv1.weight
requires grad: layer4.0.bn1.weight
requires grad: layer4.0.bn1.bias
requires grad: layer4.0.conv2.weight
requires grad: layer4.0.bn2.weight
requires grad: layer4.0.bn2.bias
requires grad: layer4.0.downsample.0.weight
requires grad: layer4.0.downsample.1.weight
requires grad: layer4.0.downsample.1.bias
requires grad: layer4.1.conv1.weight
requires grad: layer4.1.bn1.weight
requires grad: layer4.1.bn1.bias
requires grad: layer4.1.conv2.weight
requires grad: layer4.1.bn2.weight
requires grad: layer4.1.bn2.bias
requires grad: fc.weight
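
As an extra sanity check (not part of the original output), we can count how many parameters actually remain trainable:

num_total = sum(p.numel() for p in model.parameters())
num_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print("trainable parameters: {} / {}".format(num_trainable, num_total))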

Extension: what if our own model's structure does not match the pretrained model's; how do we load the parameters then?

Taking resnet50 as an example, we define a new convolutional neural network:

# coding=UTF-8
import math

import torch
import torch.nn as nn
import torchvision.models as models
# Bottleneck is needed below when instantiating CNN with the resnet50 layout
from torchvision.models.resnet import Bottleneck

class CNN(nn.Module):

    def __init__(self, block, layers, num_classes=2):
        self.inplanes = 64
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                               bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AvgPool2d(7, stride=1)
        # add a new transposed-convolution (deconvolution) layer
        self.convtranspose1 = nn.ConvTranspose2d(2048, 2048, kernel_size=3, stride=1, padding=1, output_padding=0, groups=1, bias=False, dilation=1)
        # add a new max-pooling layer
        self.maxpool2 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
        # drop the original fc layer and add a new fclass layer instead
        self.fclass = nn.Linear(2048, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(planes * block.expansion),
            )

        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        for i in range(1, blocks):
            layers.append(block(self.inplanes, planes))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)
        # forward pass through the newly added layers; note the flatten must come
        # AFTER the transposed convolution, which expects a 4-D (N, C, H, W) input
        x = self.convtranspose1(x)
        x = self.maxpool2(x)
        x = x.view(x.size(0), -1)
        x = self.fclass(x)

        return x

# load the pretrained model
resnet50 = models.resnet50(pretrained=True)
cnn = CNN(Bottleneck, [3, 4, 6, 3])
# take the pretrained model's parameters
pretrained_dict = resnet50.state_dict()
# take our own model's parameters
model_dict = cnn.state_dict()
# drop the keys in pretrained_dict that do not exist in model_dict
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
# update the existing model_dict with the pretrained values
model_dict.update(pretrained_dict)
# load the state_dict we actually need
cnn.load_state_dict(model_dict)
# print(resnet50)
print(cnn)
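
Note that filtering only by key name assumes every shared key also has a matching shape; if a layer keeps its name but changes size, load_state_dict will fail. A slightly stricter filter (a small tweak of the comprehension above, not from the original post) also compares tensor sizes:

pretrained_dict = {k: v for k, v in pretrained_dict.items()
                   if k in model_dict and v.size() == model_dict[k].size()}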

Below are a few more excerpted approaches to initializing a network with parts of a pretrained model:

Method 1: for every layer of our network whose structure matches the pretrained network, initialize it in bulk from the corresponding pretrained layer

model_dict = model.state_dict()                                     # take our own network's parameter dict
pretrained_dict = torch.load("I:/迅雷下载/alexnet-owt-4df8aa71.pth") # load the pretrained network's parameter dict
keys = []
for k, v in pretrained_dict.items():
    keys.append(k)
i = 0

# for layers whose shapes match the pretrained network, copy the pretrained parameters
for k, v in model_dict.items():
    if i < len(keys) and v.size() == pretrained_dict[keys[i]].size():
        model_dict[k] = pretrained_dict[keys[i]]
        # print(model_dict[k])
        i = i + 1
model.load_state_dict(model_dict)
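
Note that this approach pairs layers positionally through keys[i]: it assumes both state_dicts enumerate their matching layers in the same order, so if the two architectures diverge partway through, the pairing can silently drift onto the wrong layers. Method 2 below avoids this by matching layers explicitly by name.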

Method 2: initialize matching layers one at a time, by name

# define our own network, called CNN
model = CNN()
model_dict = model.state_dict()                                     # take our own network's parameters

for k, v in model_dict.items():                                     # see what each layer of our network is called
    print(k)

pretrained_dict = torch.load("I:/迅雷下载/alexnet-owt-4df8aa71.pth") # load the pretrained network's parameters
for k, v in pretrained_dict.items():                                # see what each layer of the pretrained network is called
    print(k)

# initialize the corresponding layers by direct assignment
model_dict['conv1.0.weight'] = pretrained_dict['features.0.weight'] # initialize our conv1.0 weights with the pretrained features.0 weights
model_dict['conv1.0.bias'] = pretrained_dict['features.0.bias']     # initialize our conv1.0 bias with the pretrained features.0 bias

model_dict['conv2.1.weight'] = pretrained_dict['features.3.weight']
model_dict['conv2.1.bias'] = pretrained_dict['features.3.bias']

model_dict['conv3.1.weight'] = pretrained_dict['features.6.weight']
model_dict['conv3.1.bias'] = pretrained_dict['features.6.bias']

... ...
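
Finally, the merged dict must be loaded back into the model for these assignments to take effect (the excerpt stops before this closing step):

model.load_state_dict(model_dict)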

The next section will cover computing the mean and standard deviation of the dataset, which are used when normalizing the data during augmentation.
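
As a preview, here is a minimal sketch of that computation (my own, assuming a train_dataset that yields 3-channel image tensors scaled to [0, 1]):

import torch
from torch.utils.data import DataLoader

loader = DataLoader(train_dataset, batch_size=64, shuffle=False)
n_pixels = 0
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
for images, _ in loader:
    # accumulate per-channel sums over every pixel of every image
    n_pixels += images.size(0) * images.size(2) * images.size(3)
    channel_sum += images.sum(dim=[0, 2, 3])
    channel_sq_sum += (images ** 2).sum(dim=[0, 2, 3])
mean = channel_sum / n_pixels
std = (channel_sq_sum / n_pixels - mean ** 2).sqrt()
print("mean:", mean, "std:", std)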

References:

https://blog.csdn.net/feizai1208917009/article/details/103598233

https://blog.csdn.net/Arthur_Holmes/article/details/103493886

https://blog.csdn.net/whut_ldz/article/details/78845947

Original post: https://www.cnblogs.com/xiximayou/p/12504579.html
