Friday, December 18, 2015

How to add a new layer in Caffe, and an implementation of a triplet loss layer in Caffe

1. How to add a new layer in Caffe

http://blog.csdn.net/tangwei2014/article/details/46812153


In newer versions of Caffe, adding a new layer has become much easier. In summary, there are four steps:
1) Add the layer's parameter message in ./src/caffe/proto/caffe.proto;
2) Add the layer's class declaration in ./include/caffe/***layers.hpp, where *** stands for one of common_layers.hpp, data_layers.hpp, neuron_layers.hpp, vision_layers.hpp, loss_layers.hpp, and so on (a sketch of such a declaration follows this list);
3) Create .cpp and .cu files under ./src/caffe/layers/ and implement the class there;
4) Add test code for the layer under ./src/caffe/test/, testing both the layer's forward and backward passes, as well as its speed.
Many people skip the last step or are unaware of it, but to guarantee the code is correct, rigorous testing is strongly recommended; sharpening the axe does not delay the woodcutting.
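
To make step 2 concrete, here is a minimal sketch of what such a class declaration could look like for a triplet loss layer in loss_layers.hpp. The class name TripletLossLayer, the buffer members, and the three-bottom-blob convention (anchor, positive, negative) are illustrative assumptions, not Caffe's actual code:

    // Minimal sketch (hypothetical names) of a declaration that could be
    // added to ./include/caffe/loss_layers.hpp.
    template <typename Dtype>
    class TripletLossLayer : public LossLayer<Dtype> {
     public:
      explicit TripletLossLayer(const LayerParameter& param)
          : LossLayer<Dtype>(param) {}
      virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
          const vector<Blob<Dtype>*>& top);

      virtual inline const char* type() const { return "TripletLoss"; }
      // Triplet loss consumes three bottom blobs: anchor, positive, negative.
      virtual inline int ExactNumBottomBlobs() const { return 3; }

     protected:
      virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
          const vector<Blob<Dtype>*>& top);
      virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
          const vector<bool>& propagate_down,
          const vector<Blob<Dtype>*>& bottom);
      // Omit the *_gpu declarations if only CPU code will be implemented.

      Blob<Dtype> diff_pos_;  // buffer for anchor - positive
      Blob<Dtype> diff_neg_;  // buffer for anchor - negative
    };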


2. Implementing a triplet loss layer in Caffe


1. Add the triplet loss layer's definition to caffe.proto

First, append optional TripletLossParameter triplet_loss_param = 138; to message LayerParameter, where 138 reflects the fields already present in my current LayerParameter message. The exact value to use can be read from the comment above the LayerParameter message, which records the next available layer-specific ID.
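
The parameter message itself is then defined at the top level of caffe.proto. The sketch below is a hypothetical definition, loosely modeled on Caffe's ContrastiveLossParameter, not the author's actual message; the margin field is an assumption:

    // Hypothetical parameter message for the triplet loss layer; the
    // margin field is an assumption modeled on ContrastiveLossParameter.
    message TripletLossParameter {
      // Margin to enforce between the anchor-positive distance and the
      // anchor-negative distance.
      optional float margin = 1 [default = 1.0];
    }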

Tuesday, December 15, 2015

Caffe HDF5 data layer preparation

https://groups.google.com/forum/#!topic/caffe-users/HN1eaUPBKO4

https://github.com/BVLC/caffe/tree/master/matlab/hdf5creation


Tuesday, December 1, 2015

Developing new layers

https://github.com/BVLC/caffe/wiki/Development

Caffe: defining a new layer, hands-on

Here's roughly the process I follow.
  1. Add a class declaration for your layer to the appropriate one of common_layers.hpp, data_layers.hpp, loss_layers.hpp, neuron_layers.hpp, or vision_layers.hpp. Include an inline implementation of type and the *Blobs() methods to specify blob number requirements. Omit the *_gpu declarations if you'll only be implementing CPU code.
  2. Implement your layer in layers/your_layer.cpp (a skeleton sketch follows this list).
    • SetUp for initialization: reading parameters, allocating buffers, etc.
    • Forward_cpu for the function your layer computes
    • Backward_cpu for its gradient
  3. (Optional) Implement the GPU versions Forward_gpu and Backward_gpu in layers/your_layer.cu.
  4. Add your layer to proto/caffe.proto, updating the next available ID. Also declare parameters, if needed, in this file.
  5. Make your layer creatable by adding it to layer_factory.cpp (see the registration sketch below).
  6. Write tests in test/test_your_layer.cpp. Use test/test_gradient_check_util.hpp to check that your Forward and Backward implementations are in numerical agreement (see the test sketch below).
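
To flesh out step 2, a skeleton of layers/your_layer.cpp might look like the sketch below. YourLayer is a placeholder name; note that in more recent Caffe the initialization hook is spelled LayerSetUp rather than SetUp:

    // Skeleton of ./src/caffe/layers/your_layer.cpp (hypothetical names).
    #include <vector>

    #include "caffe/layer.hpp"
    #include "caffe/vision_layers.hpp"

    namespace caffe {

    template <typename Dtype>
    void YourLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,
        const vector<Blob<Dtype>*>& top) {
      // Read parameters from this->layer_param_ and allocate buffers.
    }

    template <typename Dtype>
    void YourLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
        const vector<Blob<Dtype>*>& top) {
      // Compute the layer's function from bottom into top.
    }

    template <typename Dtype>
    void YourLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
        const vector<bool>& propagate_down,
        const vector<Blob<Dtype>*>& bottom) {
      // Compute gradients with respect to each bottom blob, guarded by
      // propagate_down, using the diffs stored in the top blobs.
    }

    #ifdef CPU_ONLY
    STUB_GPU(YourLayer);  // stubs out Forward_gpu/Backward_gpu
    #endif

    INSTANTIATE_CLASS(YourLayer);

    }  // namespace caffe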
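
For step 5, the factory mechanism has changed over time: older Caffe dispatched through a switch statement in layer_factory.cpp, while more recent versions only require a registration macro, typically placed at the bottom of the layer's .cpp file. A sketch assuming the macro-based factory:

    // Registers the layer under the type string "Your", so a net that
    // specifies this type can instantiate YourLayer through the factory.
    REGISTER_LAYER_CLASS(Your);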
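
And for step 6, a gradient-check test might look like the sketch below, modeled on Caffe's existing layer tests; the fixture details (blob shape, Gaussian filler) are illustrative assumptions:

    // Sketch of ./src/caffe/test/test_your_layer.cpp (hypothetical names).
    #include <vector>

    #include "gtest/gtest.h"

    #include "caffe/blob.hpp"
    #include "caffe/filler.hpp"
    #include "caffe/vision_layers.hpp"

    #include "caffe/test/test_caffe_main.hpp"
    #include "caffe/test/test_gradient_check_util.hpp"

    namespace caffe {

    template <typename TypeParam>
    class YourLayerTest : public MultiDeviceTest<TypeParam> {
      typedef typename TypeParam::Dtype Dtype;

     protected:
      YourLayerTest()
          : blob_bottom_(new Blob<Dtype>(2, 3, 4, 5)),
            blob_top_(new Blob<Dtype>()) {
        // Fill the bottom blob with Gaussian noise.
        FillerParameter filler_param;
        GaussianFiller<Dtype> filler(filler_param);
        filler.Fill(blob_bottom_);
        blob_bottom_vec_.push_back(blob_bottom_);
        blob_top_vec_.push_back(blob_top_);
      }
      virtual ~YourLayerTest() { delete blob_bottom_; delete blob_top_; }

      Blob<Dtype>* const blob_bottom_;
      Blob<Dtype>* const blob_top_;
      vector<Blob<Dtype>*> blob_bottom_vec_;
      vector<Blob<Dtype>*> blob_top_vec_;
    };

    TYPED_TEST_CASE(YourLayerTest, TestDtypesAndDevices);

    TYPED_TEST(YourLayerTest, TestGradient) {
      typedef typename TypeParam::Dtype Dtype;
      LayerParameter layer_param;
      YourLayer<Dtype> layer(layer_param);
      // Compare the analytic gradients from Backward against finite
      // differences of Forward, element by element.
      GradientChecker<Dtype> checker(1e-2, 1e-2);
      checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,
          this->blob_top_vec_);
    }

    }  // namespace caffe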