Thursday, February 9, 2017

Some good documents for CNN and RNN beginners


I have been thinking about what material to recommend to students who are not yet familiar with deep learning, since there are many unfamiliar terms when you are just starting out. Today I found a very good presentation for you; it is well written and worth reading carefully.

https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/

For other related background, see http://ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial or, in Chinese, http://ufldl.stanford.edu/wiki/index.php/UFLDL教程


For RNNs, see Colah's blog:
 http://colah.github.io/posts/2015-08-Understanding-LSTMs/

Wednesday, March 30, 2016

Install dense trajectory

Step 1: Install OpenCV correctly


Building and installing OpenCV 2.4.7 on Ubuntu, and reading from a camera

Main reference:
Development environment: Ubuntu in VMware + OpenCV 2.4.7
Installation procedure:

The Installation Procedure

To install and configure OpenCV 2.4.2, complete the following steps. The commands shown in each step can be copied and pasted directly into a Linux command line.
  1. Remove any installed versions of ffmpeg and x264.
    sudo apt-get remove ffmpeg x264 libx264-dev

  2. Get all the dependencies for x264 and ffmpeg.
    sudo apt-get update
    sudo apt-get install build-essential checkinstall git cmake libfaac-dev libjack-jackd2-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libsdl1.2-dev libtheora-dev libva-dev libvdpau-dev libvorbis-dev libx11-dev libxfixes-dev libxvidcore-dev texi2html yasm zlib1g-dev

  3. Download and install gstreamer.
    sudo apt-get install libgstreamer0.10-0 libgstreamer0.10-dev gstreamer0.10-tools gstreamer0.10-plugins-base libgstreamer-plugins-base0.10-dev gstreamer0.10-plugins-good gstreamer0.10-plugins-ugly gstreamer0.10-plugins-bad gstreamer0.10-ffmpeg

  4. Download and install gtk.
    sudo apt-get install libgtk2.0-0 libgtk2.0-dev

  5. Download and install libjpeg.
    sudo apt-get install libjpeg8 libjpeg8-dev

  6. Create a directory to hold source code.
    cd ~
    mkdir src

  7. Download and install x264.
    1. Download a recent stable snapshot of x264 from ftp://ftp.videolan.org/pub/videolan/x264/snapshots/. The exact version does not seem to matter. To write this guide, I used version x264-snapshot-20120528-2245-stable.tar.bz2, but I have used previous versions too.
      cd ~/src
      wget ftp://ftp.videolan.org/pub/videolan/x264/snapshots/x264-snapshot-20120528-2245-stable.tar.bz2
      tar xvf x264-snapshot-20120528-2245-stable.tar.bz2
      cd x264-snapshot-20120528-2245-stable
    2. Configure and build the x264 libraries.
      ./configure --enable-static
      make
      sudo make install
      IMPORTANT: If you are running a 64-bit version of Ubuntu, you must configure x264 as shown in the following command:
      ./configure --enable-shared --enable-pic
      The --enable-shared and --enable-pic options might also be required when you compile for some other architectures, such as ARM. You know you need these options if you get the following error when compiling OpenCV:
      [ 25%] Building CXX object modules/highgui/CMakeFiles/opencv_highgui.dir/src/bitstrm.cpp.o
      Linking CXX shared library ../../lib/libopencv_highgui.so
      /usr/bin/ld: /usr/local/lib/libavcodec.a(avpacket.o): relocation R_X86_64_32S against `av_destruct_packet' can not be used when making a shared object; recompile with -fPIC
      /usr/local/lib/libavcodec.a: could not read symbols: Bad value


  8. Download and install ffmpeg.
    1. Download ffmpeg version 0.11.1 from http://ffmpeg.org/download.html.
      cd ~/src
      wget http://ffmpeg.org/releases/ffmpeg-0.11.1.tar.bz2
      tar xvf ffmpeg-0.11.1.tar.bz2
      cd ffmpeg-0.11.1
    2. Configure and build ffmpeg.
      ./configure --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-nonfree --enable-postproc --enable-version3 --enable-x11grab
      make
      sudo make install
      IMPORTANT: Just like with x264 in the previous step, you must configure ffmpeg with the --enable-shared and --enable-pic options if you are running a 64-bit version of Ubuntu or some other architectures, such as ARM.
      ./configure --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-nonfree --enable-postproc --enable-version3 --enable-x11grab --enable-shared --enable-pic

  9. Download and install a recent version of v4l (Video for Linux) from http://www.linuxtv.org/downloads/v4l-utils/. For this guide I used version 0.8.8.
    cd ~/src
    wget http://www.linuxtv.org/downloads/v4l-utils/v4l-utils-0.8.8.tar.bz2
    tar xvf v4l-utils-0.8.8.tar.bz2
    cd v4l-utils-0.8.8
    make
    sudo make install

  10. Download and install OpenCV 2.4.2.
    1. Download OpenCV version 2.4.2 from http://sourceforge.net/projects/opencvlibrary/files/
      cd ~/src
      wget http://downloads.sourceforge.net/project/opencvlibrary/opencv-unix/2.4.2/OpenCV-2.4.2.tar.bz2
      tar xvf OpenCV-2.4.2.tar.bz2
    2. Create a new build directory and run cmake:
      cd OpenCV-2.4.2/
      mkdir build
      cd build
      cmake -D CMAKE_BUILD_TYPE=RELEASE ..
    3. Verify that the output of cmake includes the following text:
      • found gstreamer-base-0.10
      • GTK+ 2.x: YES
      • FFMPEG: YES
      • GStreamer: YES
      • V4L/V4L2: Using libv4l
    4. Build and install OpenCV.
      make
      sudo make install

  11. Configure Linux.
    1. Tell linux where the shared libraries for OpenCV are located by entering the following shell command:
      export LD_LIBRARY_PATH=/usr/local/lib
      Add the command to your .bashrc file so that you don't have to enter it every time you start a new terminal. (Note: I mainly used the approach of adding export LD_LIBRARY_PATH=/usr/local/lib to the .bashrc file. The .bashrc file is hidden, located in your home directory; you can see it with the shell command ls -al.)
      Alternatively, you can configure the system wide library search path. Using your favorite editor, add a single line containing the text /usr/local/lib to the end of a file named /etc/ld.so.conf.d/opencv.conf. In the standard Ubuntu install, the opencv.conf file does not exist; you need to create it. Using vi, for example, enter the following commands:
      sudo vi /etc/ld.so.conf.d/opencv.conf
      G
      o
      /usr/local/lib
      <Esc>
      :wq!
      After editing the opencv.conf file, enter the following command:
      sudo ldconfig /etc/ld.so.conf
    2. Using your favorite editor, add the following two lines to the end of /etc/bash.bashrc:
      PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
      export PKG_CONFIG_PATH

After completing the previous steps, your system should be ready to compile code that uses the OpenCV libraries. The following example shows one way to compile code for OpenCV (it is best to reboot first here):
g++ `pkg-config opencv --cflags` DenseTrack.cpp  -o DenseTrack `pkg-config opencv --libs` 

Step 2: If you hit linker errors, add libippicv.a to the lib directory, and remember to configure the path of OpenCV.

Friday, December 18, 2015

How to add a layer in Caffe, and a Caffe implementation of the triplet loss layer

1. How to add a new layer in Caffe

http://blog.csdn.net/tangwei2014/article/details/46812153


Adding a new layer in newer versions of Caffe has become much easier. In summary, there are four steps:
1) Add the corresponding layer's parameter message in ./src/caffe/proto/caffe.proto;
2) Add the class declaration for the layer in the appropriate ./include/caffe/***layers.hpp, where *** stands for common_layers.hpp, data_layers.hpp, neuron_layers.hpp, vision_layers.hpp, loss_layers.hpp, etc.;
3) Create new .cpp and .cu files under ./src/caffe/layers/ and implement the class;
4) Add test code for the layer in ./src/caffe/gtest/ to test the forward and backward passes of the layer you wrote; the tests also cover speed.
Many people skip the last step, or do not realize it exists, but to make sure the code is correct it is advisable to test rigorously; sharpening the axe does not delay the cutting of firewood.
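Step 1) above can be sketched roughly as follows. This is an illustrative placeholder only: the message name, field name, default value, and field ID are not taken from any particular Caffe tree; pick the next unused LayerParameter field ID in your own copy.

```protobuf
// Sketch for step 1): a parameter message for a hypothetical layer,
// added to ./src/caffe/proto/caffe.proto. All names and IDs are placeholders.
message MyLossParameter {
  optional float margin = 1 [default = 1.0];
}

// ...and inside message LayerParameter, using the next unused field ID:
//   optional MyLossParameter my_loss_param = 147;
```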


2. Implementing a triplet loss layer in Caffe


1. Add the definition of the triplet loss layer in caffe.proto

First, append optional TripletLossParameter triplet_loss_param = 138; to the LayerParameter message, where 138 is the number of elements currently in my LayerParameter message. To find the exact number in your copy, see the note in the comment above the LayerParameter message:
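For reference, here is a minimal sketch in plain Python of what a triplet loss layer's forward pass computes. The margin default and the use of squared Euclidean distance are assumptions; check them against your actual layer implementation.

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, ||a-p||^2 - ||a-n||^2 + margin), averaged over the batch.

    Each argument is a list of feature vectors (lists of floats).
    """
    def sq_dist(u, v):
        # Squared Euclidean distance between two feature vectors.
        return sum((ui - vi) ** 2 for ui, vi in zip(u, v))

    losses = [
        max(0.0, sq_dist(a, p) - sq_dist(a, n) + margin)
        for a, p, n in zip(anchor, positive, negative)
    ]
    return sum(losses) / len(losses)
```

The hinge goes to zero once the negative is farther from the anchor than the positive by at least the margin; that is the case the backward pass propagates no gradient for.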

Tuesday, December 15, 2015

Caffe HDF5 data layer preparation

https://groups.google.com/forum/#!topic/caffe-users/HN1eaUPBKO4

https://github.com/BVLC/caffe/tree/master/matlab/hdf5creation
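Once the .h5 files are created (e.g., with the hdf5creation MATLAB scripts linked above), they are consumed through an HDF5Data layer in the network prototxt. Roughly like the following, where the file name and batch size are placeholders; the source file is a plain text file listing the paths of the .h5 files, one per line:

```prototxt
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  hdf5_data_param {
    source: "train_h5_list.txt"   # text file listing .h5 paths, one per line
    batch_size: 64                # placeholder value
  }
  include { phase: TRAIN }
}
```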


Tuesday, December 1, 2015

Developing new layers

https://github.com/BVLC/caffe/wiki/Development

Defining a new Caffe layer, hands on

Here's roughly the process I follow.
  1. Add a class declaration for your layer to the appropriate one of common_layers.hpp, data_layers.hpp, loss_layers.hpp, neuron_layers.hpp, or vision_layers.hpp. Include an inline implementation of type and the *Blobs() methods to specify blob number requirements. Omit the *_gpu declarations if you'll only be implementing CPU code.
  2. Implement your layer in layers/your_layer.cpp.
    • SetUp for initialization: reading parameters, allocating buffers, etc.
    • Forward_cpu for the function your layer computes
    • Backward_cpu for its gradient
  3. (Optional) Implement the GPU versions Forward_gpu and Backward_gpu in layers/your_layer.cu.
  4. Add your layer to proto/caffe.proto, updating the next available ID. Also declare parameters, if needed, in this file.
  5. Make your layer creatable by adding it to layer_factory.cpp.
  6. Write tests in test/test_your_layer.cpp. Use test/test_gradient_check_util.hpp to check that your Forward and Backward implementations are in numerical agreement.
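The numerical agreement check in step 6 can be illustrated with a self-contained sketch. This is a toy stand-in, not Caffe's actual gradient checker: the idea is to compare the analytic backward pass against a centered finite difference of the forward pass.

```python
def forward(x):
    # Stands in for a layer's Forward_cpu on a toy function: f(x) = x^2.
    return x * x

def backward(x):
    # Stands in for Backward_cpu, the analytic gradient: f'(x) = 2x.
    return 2.0 * x

def gradient_check(f, df, xs, eps=1e-5, tol=1e-6):
    """Return True if df matches the centered difference of f at every x."""
    for x in xs:
        numeric = (f(x + eps) - f(x - eps)) / (2.0 * eps)
        if abs(numeric - df(x)) > tol:
            return False
    return True
```

If the analytic gradient is wrong (say, 3x instead of 2x), the check fails, which is exactly the kind of bug this test is meant to catch before training.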

Friday, September 4, 2015

Useful datasets for CV

http://rogerioferis.com/VisualRecognitionAndSearch2014/Resources.html