Code

For my current research I use Torch7, of which I am one of the core contributors.

All the code that I wrote during my PhD for different papers is part of a single library that we developed at CBLL. It is written in Lush, an object-oriented Lisp dialect with an integrated compiler that translates Lush code to C and generates very efficient native code. To run our code, you need to install Lush on your system.

Data

We generally use grayscale Berkeley Natural Images for demonstrating unsupervised learning. We provide 50K randomly extracted 40×40 patches here.

EbLearn

EbLearn is a machine learning library suitable for Energy-Based Learning. It contains common modules for supervised and unsupervised learning with online stochastic gradient descent. The library can be downloaded here; a README file in the root folder explains how to use it from Lush.
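
As an illustration of the kind of training loop EbLearn's modules implement, here is a minimal online stochastic gradient descent sketch in Python. This is not EbLearn code; the linear model, data, and learning rate are arbitrary choices made purely to show the per-sample update pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression problem: y = x . true_w + small noise.
true_w = np.array([2.0, -1.0])
X = rng.standard_normal((500, 2))
y = X @ true_w + 0.01 * rng.standard_normal(500)

# Online SGD: the weights are updated after every single sample,
# rather than after a full pass over the dataset.
w = np.zeros(2)
lr = 0.05
for x_i, y_i in zip(X, y):
    err = x_i @ w - y_i          # per-sample prediction error
    w -= lr * err * x_i          # per-sample gradient step
print(w)  # should be close to true_w after one pass
```

The same structure (forward pass, per-sample gradient, immediate update) carries over to the energy-based modules described above, with the squared error replaced by the module's energy function.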

Packages

We provide the EbLearn library and data (for some experiments) already packaged in the individual archives below, so you do not need to download them separately. The following is a list of papers for which we provide code. Every experiment is executed the same way: simply enter the following line at the Lush prompt.

^L run

(yes, including the caret symbol). All the input parameters for a particular algorithm are contained in a text file named "input" in the package. You can change the parameters in that file to experiment with different setups; it is good practice to change the experiment-number parameter (exper-nr) for each new run.

Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition

Koray Kavukcuoglu, Marc’Aurelio Ranzato, and Yann LeCun, “Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition”, Computational and Biological Learning Lab, Courant Institute, NYU, 2008.
Arxiv PDF DjVu Poster

bibtex

@techreport { koray-psd-08,
 title = "Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition",
 author = "Kavukcuoglu, Koray and Ranzato, Marc'Aurelio and LeCun, Yann",
 institution = "Computational and Biological Learning Lab, Courant Institute, NYU",
 number = "CBLL-TR-2008-12-01",
 year = "2008",
 }

We provide code for the Predictive Sparse Decomposition (PSD) algorithm explained in this paper. Data and an input configuration for training PSD with 256 code units are also included in the package.
Download (65MB), Download (without data, 83KB)
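
To convey the idea behind PSD independently of the Lush package, here is a toy Python sketch. It is not the released code; the dimensions, learning rates, and the use of ISTA for inference and tanh for the predictor are illustrative assumptions. The key point it shows is PSD's structure: jointly learn a dictionary D and a feed-forward predictor so that, after training, inference is a single cheap forward pass instead of an iterative optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_code, n_samples = 16, 32, 200
X = rng.standard_normal((n_samples, n_in))  # stand-in for image patches

D = rng.standard_normal((n_in, n_code)) * 0.1   # decoder (dictionary)
W = rng.standard_normal((n_code, n_in)) * 0.1   # encoder (predictor)
lam, alpha, lr = 0.1, 1.0, 0.01                 # sparsity / prediction weights

def ista(x, D, z0, lam, steps=20, step=0.1):
    """A few ISTA iterations: minimize ||x - D z||^2 + lam * |z|_1."""
    z = z0.copy()
    for _ in range(steps):
        grad = D.T @ (D @ z - x)
        z = z - step * grad
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return z

for x in X:  # online updates, one sample at a time
    z_pred = np.tanh(W @ x)            # fast feed-forward prediction of the code
    z = ista(x, D, z_pred, lam)        # sparse code, initialized at the prediction
    # gradient steps on the reconstruction and prediction losses
    D -= lr * np.outer(D @ z - x, z)
    W -= lr * alpha * np.outer((z_pred - z) * (1 - z_pred ** 2), x)
    # keep dictionary columns bounded so sparsity stays meaningful
    D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1.0)

# "Fast inference": after training, codes are one matrix multiply per sample.
codes = np.tanh(X @ W.T)
print(codes.shape)
```

The trade-off this sketch illustrates is the one in the paper's title: the predictor approximates the result of iterative sparse inference at a fraction of its cost.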

Learning Invariant Features through Topographic Filter Maps

Koray Kavukcuoglu, Marc'Aurelio Ranzato, Rob Fergus, and Yann LeCun, “Learning Invariant Features through Topographic Filter Maps”, in Proc. International Conference on Computer Vision and Pattern Recognition (CVPR’09), 2009.
PDF DjVu

bibtex

@inproceedings { koray-cvpr-09,
 title = "Learning Invariant Features through Topographic Filter Maps",
 author = "Kavukcuoglu, Koray and Ranzato, Marc'Aurelio and Fergus, Rob and LeCun, Yann",
 booktitle = "Proc. International Conference on Computer Vision and Pattern Recognition (CVPR'09)",
 publisher = "IEEE",
 year = "2009",
}

We provide code for the Invariant Predictive Sparse Decomposition (IPSD) algorithm explained in this paper. Data and an input configuration file for training IPSD with 400 code units on a 20×20 topographic map are also included in the package.
Download (65MB), Download (without data, 84KB)
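
To illustrate what "topographic" means here, the following Python sketch computes a group-sparsity penalty of the kind IPSD uses: code units are laid out on a 2D grid, and the penalty sums, over overlapping neighborhoods, the square root of the neighborhood's total squared activation. This is a toy illustration, not the released code; the toroidal wrap-around and the neighborhood radius are assumptions for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)
side = 20                                 # 20x20 topographic map -> 400 code units
z = rng.standard_normal((side, side))     # stand-in for a code vector on the grid

def topo_penalty(z, radius=1):
    """Sum over all grid positions of sqrt(sum of squared activations
    in the surrounding (2*radius+1)^2 neighborhood, with wrap-around)."""
    total = 0.0
    for i in range(side):
        for j in range(side):
            rows = [(i + di) % side for di in range(-radius, radius + 1)]
            cols = [(j + dj) % side for dj in range(-radius, radius + 1)]
            patch = z[np.ix_(rows, cols)]
            total += np.sqrt(np.sum(patch ** 2))
    return total

print(topo_penalty(z))
```

Because each unit's activation appears inside several overlapping square-root terms, minimizing this penalty encourages active units to cluster on the map, which is what produces topographically organized, locally invariant filters.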

What is the Best Multi-Stage Architecture for Object Recognition?

Kevin Jarrett, Koray Kavukcuoglu, Marc’Aurelio Ranzato, and Yann LeCun, “What is the Best Multi-Stage Architecture for Object Recognition?”, in Proc. International Conference on Computer Vision (ICCV’09), 2009.
PDF DjVu

bibtex

@inproceedings { jarrett-iccv-09,
 title = "What is the Best Multi-Stage Architecture for Object Recognition?",
 author = "Jarrett, Kevin and Kavukcuoglu, Koray and Ranzato, Marc'Aurelio and LeCun, Yann",
 booktitle = "Proc. International Conference on Computer Vision (ICCV'09)",
 publisher = "IEEE",
 year = "2009",
}

For this paper, we provide a limited MATLAB package that only performs feature extraction with random convolutional filters, followed by training a linear logistic regression classifier. Note that this package demonstrates only one of the cases in Table 1 of the paper (RR).
Download
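
The pipeline the MATLAB package implements can be sketched in a few lines of Python. This is not the released code: the image size, filter size, rectification, and pooling choices below are simplified assumptions, and the data and labels are random placeholders. It only shows the structure: fixed random convolutional filters, a simple rectify-and-pool stage, then a trained linear logistic regression classifier on top.

```python
import numpy as np

rng = np.random.default_rng(0)
n, H, Wd, k, n_filters = 100, 16, 16, 5, 8
X = rng.standard_normal((n, H, Wd))       # placeholder "images"
y = rng.integers(0, 2, n)                 # placeholder binary labels

# Random filters: drawn once and never trained.
filters = rng.standard_normal((n_filters, k, k))

def extract(img):
    """Valid convolution with each random filter, then rectify and average-pool
    each feature map down to a single number (a crude pooling choice)."""
    oh, ow = H - k + 1, Wd - k + 1
    feats = []
    for f in filters:
        fm = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                fm[i, j] = np.sum(img[i:i + k, j:j + k] * f)
        feats.append(np.abs(fm).mean())   # rectification + average pooling
    return np.array(feats)

F = np.stack([extract(x) for x in X])     # (n, n_filters) feature matrix

# Linear logistic regression on the fixed random features (plain gradient descent).
w = np.zeros(n_filters)
b = 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    g = p - y                             # gradient of the logistic loss
    w -= 0.1 * F.T @ g / n
    b -= 0.1 * g.mean()
print(((p > 0.5) == y).mean())            # training accuracy
```

Only the final linear classifier is learned; everything before it is fixed at random, which is exactly what makes the RR case a useful baseline in the paper's comparison.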