Caffe Layers Code Analysis-DataLayer

  • Code Analysis
    • layer.hpp
      • The three main members of the Layer class
      • Layer member variables
      • The initialization function SetUp()
    • data_layer.hpp
    • Vision layers

Code Analysis

Header files related to layers:

common_layer.hpp
data_layer.hpp
loss_layer.hpp
neuron_layer.hpp
vision_layer.hpp
layer.hpp

layer.hpp

Included header files:

#include "caffe/blob.hpp"
#include "caffe/common.hpp"
#include "caffe/layer_factory.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/math_functions.hpp"

math_functions.hpp

#include "glog/logging.h"

#include "caffe/common.hpp"
#include "caffe/util/device_alternate.hpp"
#include "caffe/util/mkl_alternate.hpp"

device_alternate.hpp

Inside an #ifdef CPU_ONLY block, a number of macros are defined to stub out the calls to the GPU:

```C++
#define STUB_GPU(classname)
#define STUB_GPU_FORWARD(classname, funcname)
#define STUB_GPU_BACKWARD(classname, funcname)
```
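For reference, STUB_GPU expands roughly as follows (paraphrased from device_alternate.hpp; the exact text may differ between Caffe versions). It defines the GPU entry points of a class so that they abort with NO_GPU, a macro that logs a fatal "Cannot use GPU in CPU-only Caffe" error:

```C++
// Roughly what STUB_GPU(classname) expands to in a CPU_ONLY build:
// the GPU forward/backward exist but simply abort with NO_GPU.
#define STUB_GPU(classname) \
template <typename Dtype> \
void classname<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom, \
    const vector<Blob<Dtype>*>& top) { NO_GPU; } \
template <typename Dtype> \
void classname<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top, \
    const vector<bool>& propagate_down, \
    const vector<Blob<Dtype>*>& bottom) { NO_GPU; }
```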

The three main members of the Layer class

```C++
LayerParameter layer_param_;               // the layer parameters stored in the protobuf file
vector<shared_ptr<Blob<Dtype> > > blobs_;  // the layer's learnable parameters, used at run time
vector<bool> param_propagate_down_;        // whether to compute the diff for each parameter blob, i.e. propagate the error
```

An attempt is made to read these parameters from the protobuf file.

Layer member variables

```C++
/** The protobuf that stores the layer parameters */
// Layer description parameters, read from the network definition file in protocol buffers format
LayerParameter layer_param_;

/** The phase: TRAIN or TEST */
// Layer state: whether the layer takes part in training or in testing
Phase phase_;

/** The vector that stores the learnable parameters as a set of blobs. */
// Layer weights and biases; a vector is used because the weights and the biases are kept in two separate blobs
vector<shared_ptr<Blob<Dtype> > > blobs_;
```

```C++
// The constructor does not need to be overridden; all initialization work is done in SetUp()
explicit Layer(const LayerParameter& param)
  : layer_param_(param), is_shared_(false) {
  // Set phase and copy blobs (if there are any).
  phase_ = param.phase();
  if (layer_param_.blobs_size() > 0) {
    blobs_.resize(layer_param_.blobs_size());
    for (int i = 0; i < layer_param_.blobs_size(); ++i) {
      blobs_[i].reset(new Blob<Dtype>());
      blobs_[i]->FromProto(layer_param_.blobs(i));
    }
  }
}

// Virtual destructor.
// A base class destructor should generally be virtual, so that when a derived-class object
// is deleted through a base-class pointer, the derived class's destructor is also called.
// http://blog.csdn.net/starlee/article/details/619827
virtual ~Layer() {}
```

The initialization function SetUp()

```C++
/**
 * @brief Implements common layer setup functionality.
 * @param bottom
 *     the layer's input blobs, whose storage has already been allocated
 * @param top
 *     the layer's output blobs; the blob objects are constructed, but their storage
 *     is not yet allocated. The required size depends on the bottom blobs and on
 *     layer_param_, and is set in the Reshape function.
 *
 * 1. Checks that the numbers of bottom and top blobs are correct; each layer
 *    accepts a different number of inputs and outputs.
 * 2. Calls LayerSetUp to do layer-specific setup; each Layer subclass overrides
 *    this function to perform its own initialization.
 * 3. Calls Reshape to allocate appropriately sized storage for the top blobs.
 * 4. Sets the loss weight multiplier for each top blob; for non-loss layers the
 *    value is zero.
 *
 * This method is not virtual and follows a fixed pattern, so it is not overridden.
 */
void SetUp(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  InitMutex();
  CheckBlobCounts(bottom, top);
  LayerSetUp(bottom, top);
  Reshape(bottom, top);
  SetLossWeights(top);
}
```
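Step 1 above relies on virtual accessors that each subclass overrides to declare how many bottom and top blobs it expects; CheckBlobCounts compares these with the actual vectors. For example, DataLayer declares (paraphrased from data_layer.hpp):

```C++
// DataLayer takes no bottom blobs and produces one or two top blobs
// (the data blob, plus an optional label blob).
virtual inline int ExactNumBottomBlobs() const { return 0; }
virtual inline int MinTopBlobs() const { return 1; }
virtual inline int MaxTopBlobs() const { return 2; }
```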

data_reader.hpp
```C++
/**
 * @brief Reads data from a source to queues available to data layers.
 * A single reading thread is created per source, even if multiple solvers
 * are running in parallel, e.g. for multi-GPU training. This makes sure
 * databases are read sequentially, and that each solver accesses a different
 * subset of the database. Data is distributed to solvers in a round-robin
 * way to keep parallel training deterministic.
 */
```

The DataReader is responsible for reading data and handing it to the data layer. One independent reading thread is created per source, even when multiple solvers are running in parallel; during multi-GPU training, for example, this guarantees that the database is read sequentially.
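The reader hands data to the layer through a pair of blocking queues, one holding free (empty) Datum buffers and one holding full ones. Below is a minimal sketch of that idea built on standard C++ primitives; it is illustrative only, and Caffe's own BlockingQueue and DataReader (which use boost) differ in detail:

```C++
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

// A tiny thread-safe queue: push never blocks, pop waits until an item is available.
template <typename T>
class BlockingQueue {
 public:
  void push(T t) {
    std::lock_guard<std::mutex> lock(mutex_);
    queue_.push(std::move(t));
    cond_.notify_one();
  }
  T pop() {
    std::unique_lock<std::mutex> lock(mutex_);
    cond_.wait(lock, [this] { return !queue_.empty(); });
    T t = std::move(queue_.front());
    queue_.pop();
    return t;
  }
 private:
  std::queue<T> queue_;
  std::mutex mutex_;
  std::condition_variable cond_;
};

// Usage pattern: the reading thread pops an empty buffer from "free", fills it
// from the database, and pushes it to "full"; the data layer pops from "full"
// and later returns the buffer to "free". Buffers are reused, and consumers
// receive items in the order they were read.
```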

data_layer.hpp

One-sentence summary of its function:

DataLayer reads the contents of a database into the corresponding blobs, one batch at a time.

Data enters Caffe's processing pipeline through the data layers, which sit at the very bottom of the Net. The data can come from an efficient database (LevelDB or LMDB), directly from memory, or, when efficiency is not a major concern, from files on disk in HDF5 or common image formats. The data layers inherit from Layer; the inheritance relationship is shown in the figure.

The concrete subclasses are DataLayer, ImageDataLayer, WindowDataLayer, MemoryDataLayer, HDF5DataLayer, HDF5OutputLayer and DummyDataLayer.

hdf5, leveldb and lmdb are where the concrete data formats come in. As the input layer for raw data, data_layer sits at the very bottom of the whole network: it can read data from the leveldb or lmdb databases, directly from memory, from HDF5 files, or even from raw images.

A brief introduction to these storage formats:

  • LevelDB is a high-performance key/value store developed by Google. It is simple to call, and its data is compressed with Snappy, which is said to be quite efficient and reduces disk I/O; see Wikipedia for concrete examples.
  • LMDB (Lightning Memory-Mapped Database) is a key/value store similar to LevelDB, but apparently with even better performance; its homepage advertises it as "ultra-fast, ultra-compact".
  • HDF (Hierarchical Data Format) is a file format, with accompanying libraries, designed for storing and processing large volumes of scientific data. The most popular version today is HDF5, whose files contain two basic kinds of data objects:
    • group: similar to a folder; it can contain multiple datasets or child groups;
    • dataset: the actual data content, which can be a multi-dimensional array or a more complex data type.

    The above is taken from Wikipedia.

All of these functions take the same second argument, a destination blob, while the input can be chosen according to the situation: a blob, an OpenCV Mat, or the Datum structure defined in the proto file.
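The functions referred to here appear to be the Transform overloads of DataTransformer. Paraphrased from data_transformer.hpp (the signatures may differ slightly between Caffe versions), they look roughly like this:

```C++
// The destination blob is always the last argument; the source varies.
void Transform(const Datum& datum, Blob<Dtype>* transformed_blob);
void Transform(const vector<Datum>& datum_vector, Blob<Dtype>* transformed_blob);
void Transform(const cv::Mat& cv_img, Blob<Dtype>* transformed_blob);  // only when built with OpenCV
void Transform(Blob<Dtype>* input_blob, Blob<Dtype>* transformed_blob);
```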

#### LayerSetUp

The implementation of LayerSetUp is as follows.

DataLayer inherits LayerSetUp from its parent class BasePrefetchingDataLayer:
```C++
// base_data_layer.cpp
template <typename Dtype>
void BasePrefetchingDataLayer<Dtype>::LayerSetUp(
    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
  // 1. Call the parent class BaseDataLayer's LayerSetUp
  BaseDataLayer<Dtype>::LayerSetUp(bottom, top);
  // Before starting the prefetch thread, we make cpu_data and gpu_data
  // calls so that the prefetch thread does not accidentally make simultaneous
  // cudaMalloc calls when the main thread is running. In some GPUs this
  // seems to cause failures if we do not do so.
  // 2. Touch the prefetch buffers here so that their storage is allocated in advance
  for (int i = 0; i < PREFETCH_COUNT; ++i) {
    prefetch_[i].data_.mutable_cpu_data();
    if (this->output_labels_) {
      prefetch_[i].label_.mutable_cpu_data();
    }
  }
#ifndef CPU_ONLY
  if (Caffe::mode() == Caffe::GPU) {
    for (int i = 0; i < PREFETCH_COUNT; ++i) {
      prefetch_[i].data_.mutable_gpu_data();
      if (this->output_labels_) {
        prefetch_[i].label_.mutable_gpu_data();
      }
    }
  }
#endif
  // 3. Start the thread used to prefetch data
  DLOG(INFO) << "Initializing prefetch";
  this->data_transformer_->InitRand();
  StartInternalThread();
  DLOG(INFO) << "Prefetch initialized.";
}
```
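StartInternalThread launches the prefetch thread, whose entry point InternalThreadEntry runs a loop roughly like the sketch below (simplified and paraphrased; Caffe's actual implementation also manages a CUDA stream for asynchronous copies): it repeatedly takes an empty Batch from the free queue, fills it with load_batch (implemented by DataLayer), and pushes it onto the full queue, from which the forward pass later pops.

```C++
// Simplified sketch of the prefetch loop, not a verbatim copy of base_data_layer.cpp.
template <typename Dtype>
void BasePrefetchingDataLayer<Dtype>::InternalThreadEntry() {
  try {
    while (!must_stop()) {
      Batch<Dtype>* batch = prefetch_free_.pop();  // take an empty batch buffer
      load_batch(batch);                           // fill batch->data_ (and label_) from the reader
      prefetch_full_.push(batch);                  // hand it to Forward_cpu / Forward_gpu
    }
  } catch (boost::thread_interrupted&) {
    // The thread is interrupted when the layer is destroyed; just exit.
  }
}
```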

In terms of the concrete flow:

1. Call the parent class BaseDataLayer's LayerSetUp:

```C++
// base_data_layer.cpp
template <typename Dtype>
void BaseDataLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
  if (top.size() == 1) {
    output_labels_ = false;
  } else {
    output_labels_ = true;
  }
  data_transformer_.reset(
      new DataTransformer<Dtype>(transform_param_, this->phase_));
  data_transformer_->InitRand();
  // The subclasses should setup the size of bottom and top
  DataLayerSetUp(bottom, top);
}

```

2. Based on the number of top blobs, set output_labels_ to indicate whether the layer also outputs labels, then call the layer's own DataLayerSetUp method:

```C++
// data_layer.cpp
template <typename Dtype>
void DataLayer<Dtype>::DataLayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
  const int batch_size = this->layer_param_.data_param().batch_size();
  // Read a data point, and use it to initialize the top blob.
  // 读取一个数据对象, 用于分析数据对象的存储空间大小,用于初始化top blob
  Datum& datum = *(reader_.full().peek());

  // Use data_transformer to infer the expected blob shape from datum.
  // 使用data_transformer 计算期望的shape
  vector<int> top_shape = this->data_transformer_->InferBlobShape(datum);
  this->transformed_data_.Reshape(top_shape);
  // Reshape top[0] and prefetch_data according to the batch_size.
  top_shape[0] = batch_size;
  top[0]->Reshape(top_shape);
  for (int i = 0; i < this->PREFETCH_COUNT; ++i) {
    this->prefetch_[i].data_.Reshape(top_shape);
  }
  LOG(INFO) << "output data size: " << top[0]->num() << ","
      << top[0]->channels() << "," << top[0]->height() << ","
      << top[0]->width();
  // label
  if (this->output_labels_) {
    vector<int> label_shape(1, batch_size);
    top[1]->Reshape(label_shape);
    for (int i = 0; i < this->PREFETCH_COUNT; ++i) {
      this->prefetch_[i].label_.Reshape(label_shape);
    }
  }
}

```
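As a worked example with hypothetical numbers (an MNIST-like setup): suppose the Datum peeked from the reader is a 1-channel 28x28 image and batch_size is 64 in the prototxt. Then DataLayerSetUp ends up with:

```C++
// InferBlobShape(datum) returns {1, 1, 28, 28} (num, channels, height, width).
vector<int> top_shape = this->data_transformer_->InferBlobShape(datum);
top_shape[0] = batch_size;   // {64, 1, 28, 28}
top[0]->Reshape(top_shape);  // data blob: 64 x 1 x 28 x 28, as is every prefetch_[i].data_
// If labels are output, top[1] and every prefetch_[i].label_ get the 1-D shape {64}.
```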

This completes the layer's initialization.

The implementation file ends with two macro invocations:

```C++
INSTANTIATE_CLASS(DataLayer);
REGISTER_LAYER_CLASS(Data);
```

These macros are defined as follows:

```C++
// ------ in common.hpp ------
// Instantiate a class with float and double specifications.
#define INSTANTIATE_CLASS(classname) \
  char gInstantiationGuard##classname; \
  template class classname<float>;   template class classname<double>
// ------ in common.hpp ------

// ------ in layer_factory.hpp ------
#define REGISTER_LAYER_CREATOR(type, creator)                                  \
  static LayerRegisterer<float> g_creator_f_##type(#type, creator<float>);     \
  static LayerRegisterer<double> g_creator_d_##type(#type, creator<double>)    \

#define REGISTER_LAYER_CLASS(type)                                             \
  template <typename Dtype>                                                    \
  shared_ptr<Layer<Dtype> > Creator_##type##Layer(const LayerParameter& param) \
  {                                                                            \
    return shared_ptr<Layer<Dtype> >(new type##Layer<Dtype>(param));           \
  }                                                                            \
  REGISTER_LAYER_CREATOR(type, Creator_##type##Layer)
// ------ in layer_factory.hpp ------
```

Here, INSTANTIATE_CLASS(DataLayer) instantiates the DataLayer class template for float and double, and REGISTER_LAYER_CLASS(Data) registers DataLayer's creator function with the layer factory, so that a layer object can be obtained directly from the layer's type name ("Data"). All of Caffe's built-in layers add these two macros at the end of their implementation files.
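Once registered, a layer can be created by its type name through the factory. A minimal usage sketch (the parameter values are made up for illustration):

```C++
// Create a DataLayer via the factory instead of calling its constructor directly.
LayerParameter param;
param.set_name("mnist");  // hypothetical layer name
param.set_type("Data");   // must match the name passed to REGISTER_LAYER_CLASS
shared_ptr<Layer<float> > layer = LayerRegistry<float>::CreateLayer(param);
```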

Vision layers

The vision layers contain many vision-related operations, including convolution, deconvolution, pooling and so on.

They usually take "images" as input and also produce "images" as output. An "image" here can be a real-world single-channel grayscale image, an RGB color image, or more generally a multi-channel 2D matrix.

BaseConvolutionLayer

It inherits from Layer and is the base class for both the convolution and the deconvolution operation.

ConvolutionLayer

It inherits from BaseConvolutionLayer; its main job is to convolve an image using the learned filter weights and biases. In Caffe the convolution is optimized into a matrix multiplication; the two key helper functions here are im2col and col2im.

(Figure: illustration of the conventional convolution)

(Figure: the matrix-multiplication version)

The concrete unrolling of the convolution inside a conv layer works like this: data is taken according to the kernel size and flattened out; within one image, the patches selected by the kernel at different positions are laid out one after another; for different images in the batch (different N), the data is appended further along the same row, and multiple channels are handled in the same way. Note that every segment within one row must correspond to one convolution-window position in the original image.
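To make the unrolling concrete, here is a deliberately simplified im2col for a single-channel image with stride 1 and no padding. It only illustrates the idea; Caffe's real im2col also handles multiple channels, padding, stride and dilation:

```C++
// Unrolls an H x W image into a matrix of shape
// (kernel_h * kernel_w) x (out_h * out_w), where out_h = H - kernel_h + 1 and
// out_w = W - kernel_w + 1. Each column of `col` holds one convolution window,
// so the convolution becomes a matrix product of the filter matrix with `col`.
// `col` must have room for kernel_h * kernel_w * out_h * out_w elements.
void im2col_simple(const float* image, int height, int width,
                   int kernel_h, int kernel_w, float* col) {
  const int out_h = height - kernel_h + 1;
  const int out_w = width - kernel_w + 1;
  for (int kh = 0; kh < kernel_h; ++kh) {
    for (int kw = 0; kw < kernel_w; ++kw) {
      const int row = kh * kernel_w + kw;
      for (int oh = 0; oh < out_h; ++oh) {
        for (int ow = 0; ow < out_w; ++ow) {
          const int column = oh * out_w + ow;
          col[row * out_h * out_w + column] =
              image[(oh + kh) * width + (ow + kw)];
        }
      }
    }
  }
}
```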

The concept of group in the convolution operation

group is the number of filter groups. Groups were introduced so that the input and output channels of a convolution layer are connected only selectively, since connecting them all would produce too many parameters. Each group convolves 1/group of the input channels with 1/group of the output channels. For example, with 4 input channels, 8 output channels and group = 2, output channels 1-4 belong to the first group and channels 5-8 to the second. A small worked example follows below.
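A self-contained sketch of the parameter saving, using the hypothetical sizes from the paragraph above (3x3 kernels, bias ignored):

```C++
#include <cstdio>
#include <initializer_list>

// Weight count of a 3x3 convolution with 4 input and 8 output channels:
// each of the `group` groups connects c_out/group output channels to
// c_in/group input channels.
int main() {
  const int c_in = 4, c_out = 8, k = 3;
  for (int group : {1, 2}) {
    const int weights = group * (c_out / group) * (c_in / group) * k * k;
    std::printf("group = %d: %d weights\n", group, weights);  // 288, then 144
  }
  return 0;
}
```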

ConvolutionLayer overrides Forward_cpu and Backward_cpu.

```C++
template <typename Dtype>
void ConvolutionLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
  const Dtype* weight = this->blobs_[0]->cpu_data();
  for (int i = 0; i < bottom.size(); ++i) {
    const Dtype* bottom_data = bottom[i]->cpu_data();
    Dtype* top_data = top[i]->mutable_cpu_data();
    for (int n = 0; n < this->num_; ++n) {
      this->forward_cpu_gemm(bottom_data + n * this->bottom_dim_, weight,
          top_data + n * this->top_dim_);
      if (this->bias_term_) {
        const Dtype* bias = this->blobs_[1]->cpu_data();
        this->forward_cpu_bias(top_data + n * this->top_dim_, bias);
      }
    }
  }
}
```

As you can see, Forward_cpu calls forward_cpu_gemm, which in turn calls caffe_cpu_gemm in math_functions, the general matrix-multiplication interface. GEMM stands for GEneral Matrix-Matrix multiplication, and its basic form is:

C = alpha * op(A) * op(B) + beta * C
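As a reference for what that contract means, here is a naive (and deliberately slow) version without the op() transposes; the BLAS routine wrapped by caffe_cpu_gemm computes the same thing far more efficiently:

```C++
// C (M x N) = alpha * A (M x K) * B (K x N) + beta * C, all row-major.
void gemm_naive(int M, int N, int K, float alpha,
                const float* A, const float* B, float beta, float* C) {
  for (int m = 0; m < M; ++m) {
    for (int n = 0; n < N; ++n) {
      float acc = 0.0f;
      for (int k = 0; k < K; ++k) {
        acc += A[m * K + k] * B[k * N + n];
      }
      C[m * N + n] = alpha * acc + beta * C[m * N + n];
    }
  }
}
```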

Backward_cpu is implemented analogously:

```C++
template <typename Dtype>
void ConvolutionLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  // Back-propagate the gradient of the error
  const Dtype* weight = this->blobs_[0]->cpu_data();
  Dtype* weight_diff = this->blobs_[0]->mutable_cpu_diff();
  for (int i = 0; i < top.size(); ++i) {
    const Dtype* top_diff = top[i]->cpu_diff();
    const Dtype* bottom_data = bottom[i]->cpu_data();
    Dtype* bottom_diff = bottom[i]->mutable_cpu_diff();
    // Bias gradient, if necessary.
    // If there is a bias term, compute the bias derivative
    if (this->bias_term_ && this->param_propagate_down_[1]) {
      Dtype* bias_diff = this->blobs_[1]->mutable_cpu_diff();
      for (int n = 0; n < this->num_; ++n) {
        this->backward_cpu_bias(bias_diff, top_diff + n * this->top_dim_);
      }
    }
    // Compute the weight gradient
    if (this->param_propagate_down_[0] || propagate_down[i]) {
      for (int n = 0; n < this->num_; ++n) {
        // gradient w.r.t. weight. Note that we will accumulate diffs.
        if (this->param_propagate_down_[0]) {
          this->weight_cpu_gemm(bottom_data + n * this->bottom_dim_,
              top_diff + n * this->top_dim_, weight_diff);
        }
        // gradient w.r.t. bottom data, if necessary.
        // Compute the gradient of the bottom data and propagate it backwards
        if (propagate_down[i]) {
          this->backward_cpu_gemm(top_diff + n * this->top_dim_, weight,
              bottom_diff + n * this->bottom_dim_);
        }
      }
    }
  }
}
```
