Abstract
In this chapter, we provide references to some of the most useful resources that give practitioners a quick start for learning and implementing a variety of deep learning models, kernel functions, Fisher vector encodings, and feature condensation techniques. Beyond the open-source code itself, a rich collection of benchmark data sets and tutorials provides all the details needed to gain hands-on experience with the techniques discussed in this book. We also present a comparative analysis of the resources in tabular form so that readers can choose tools suited to their programming expertise, software/hardware dependencies, and productivity goals.
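As a brief illustration of the kernel machinery that many of the SVM toolkits referenced below operate on, the following sketch computes a Gaussian (RBF) Gram matrix in plain Python. The data points and the bandwidth `gamma` are hypothetical choices for demonstration, not values prescribed by any of the cited libraries.

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def gram_matrix(points, gamma=0.5):
    """All pairwise kernel evaluations; symmetric with unit diagonal."""
    return [[rbf_kernel(p, q, gamma) for q in points] for p in points]

# Hypothetical toy data: three 2-D points.
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
K = gram_matrix(X)
for row in K:
    print([round(v, 4) for v in row])
```

Kernel SVM packages such as LIBSVM accept precomputed Gram matrices of exactly this form, so a custom kernel can be plugged in without modifying the solver.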
References
LeCun, Y.: The MNIST database of handwritten digits (1998). http://yann.lecun.com/exdb/mnist/
LeCun, Y.: USPS dataset. http://www.cad.zju.edu.cn/home/dengcai/Data/MLData.html
Netzer, Y., Wang, T., Coates, A., et al.: Reading digits in natural images with unsupervised feature learning. In: NIPS Workshop on Deep Learning and Unsupervised Feature Learning (2011). http://ufldl.stanford.edu/housenumbers/
Nene, S., Nayar, S., Murase, H.: Columbia object image library (COIL-20) (1996). http://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php
Nene, S., Nayar, S., Murase, H.: Columbia object image library (COIL-100). http://www.cs.columbia.edu/CAVE/software/softlib/coil-100.php
Coates, A., Lee, H., Ng, A.: An analysis of single-layer networks in unsupervised feature learning. In: Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 215–223 (2011). https://cs.stanford.edu/~acoates/stl10/
Krizhevsky, A., Nair, V., Hinton, G.: The CIFAR dataset (2014). https://www.cs.toronto.edu/~kriz/cifar.html
Fei-Fei, L., Fergus, R., Perona, P.: Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. Comput. Vision Image Underst., 59–70 (2007). http://www.vision.caltech.edu/Image_Datasets/Caltech101/
Griffin, G., Holub, A., Perona, P.: Caltech-256 object category dataset (2007). http://www.vision.caltech.edu/Image_Datasets/Caltech256/
Marlin, B., Swersky, K., et al.: Inductive principles for restricted Boltzmann machine learning. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 509–516 (2010). https://people.cs.umass.edu/~marlin/data.shtml
Everingham, M., Gool, L., Williams, C., et al.: The pascal visual object classes (VOC) challenge. Int. J. Comput. Vision 88, 303–338 (2010). http://host.robots.ox.ac.uk/pascal/VOC/
Deng, J., Dong, W., Socher, R., et al.: Imagenet: a large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248–255. IEEE (2009). http://www.image-net.org/
Computer Laboratory Cambridge University: The ORL database of faces. http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
Graham, D., Allinson, N., et al.: Characterising virtual eigensignatures for general purpose face recognition. In: Face Recognition, pp. 446–456. Springer (1998). https://cs.nyu.edu/~roweis/data.html
Martinez, A., Benavente, R.: The AR face database (1998). Comput. Vision Cent. Technical Report 3 (2007). http://www2.ece.ohio-state.edu/~aleix/ARdatabase
Peer, P.: CVL face database. Computer Vision Lab, Faculty of Computer and Information Science, University of Ljubljana, Slovenia (2005). http://www.lrv.fri.uni-lj.si/facedb.html
Gross, R., Matthews, I., Cohn, J., et al.: The CMU multi-pose, illumination, and expression (Multi-PIE) face database. CMU Robotics Institute. TR-07-08, Technical Report (2007). http://www.cs.cmu.edu/afs/cs/project/PIE/MultiPie/Multi-Pie/Home.html
Goh, R., Liu, L., Liu, X.: The CMU face in action (FIA) database. In: International Workshop on Analysis and Modeling of Faces and Gestures, pp. 255–263. Springer (2005). https://www.flintbox.com/public/project/5486/
Phillips, P., Wechsler, H., Huang, J., Rauss, P.: The FERET database and evaluation procedure for face-recognition algorithms. Image Vision Comput. 16, 295–306 (1998). https://www.nist.gov/itl/iad/image-group/color-feret-database
Georghiades, A., Belhumeur, P., Kriegman, D.: The Yale face database. http://vision.ucsd.edu/datasets/yale_face_dataset_original/yalefaces.zip
Georghiades, A., Belhumeur, P., Kriegman, D.: From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach. Intell. 23, 643–660 (2001). http://vision.ucsd.edu/~iskwak/ExtYaleDatabase/ExtYaleB.html
Lang, K.: The 20 newsgroups dataset. http://qwone.com/~jason/20Newsgroups/
Lewis, D.: Reuters-21578 dataset. http://www.daviddlewis.com/resources/testcollections/reuters21578/
Maas, A., Daly, R., Pham, P., et al.: Learning word vectors for sentiment analysis. In: Proceedings of the 49th Annual Meeting of The Association for Computational Linguistics: Human Language Technologies, Portland, Oregon, USA, Association for Computational Linguistics, June 2011, pp. 142–150 (2011). http://www.aclweb.org/anthology/P11-1015
Fan, R., Chang, K., Hsieh, C., Wang, X., Lin, C.: LIBLINEAR: a library for large linear classification. J. Mach. Learn. Res. 9, 1871–1874 (2008)
Joachims, T.: SVMlight: support vector machine. 19(4) (1999). http://svmlight.joachims.org/
Djuric, N., Lan, L., Vucetic, S.: BudgetedSVM: a toolbox for scalable SVM approximations. J. Mach. Learn. Res., 3813–3817 (2013). https://sourceforge.net/p/budgetedsvm/code/ci/master/tree/matlab/
Mangasarian, O., Wild, E.: Proximal support vector machine classifiers. In: Proceedings KDD-2001: Knowledge Discovery and Data Mining, pp. 77–86 (2001). http://research.cs.wisc.edu/dmi/svm/psvm/
Hsieh, C., Si, S., Dhillon, I.: A divide-and-conquer solver for kernel support vector machines. In: International Conference on Machine Learning (2014). http://www.cs.utexas.edu/~cjhsieh/dcsvm/
Suykens, J., Pelckmans, K.: Least squares support vector machines. Neural Process. Lett., 293–300 (1999). https://www.esat.kuleuven.be/sista/lssvmlab/
Rakotomamonjy, A., Canu, S.: SVM and kernel methods MATLAB toolbox (2008). http://asi.insa-rouen.fr/enseignants/~arakoto/toolbox/
Franc, V., Hlavac, V.: Statistical pattern recognition toolbox for MATLAB. Prague, Czech: Center for Machine Perception, Czech Technical University (2004). https://cmp.felk.cvut.cz/cmp/software/stprtool/
Weston, J., Elisseeff, A., BakIr, G.: Spider SVM toolbox (2006). http://people.kyb.tuebingen.mpg.de/spider/
Hsu, C.W., Lin, C.J.: BSVM-2.06 (2009). https://www.csie.ntu.edu.tw/~cjlin/bsvm/
Ruping, S.: Mysvm–a support vector machine (2004). http://www-ai.cs.uni-dortmund.de/SOFTWARE/MYSVM/index.html
Bottou, L., Bordes, A., Ertekin, S.: LASVM (2009). http://leon.bottou.org/projects/lasvm#introduction
Daumé III, H.: SVMsequel documentation. http://legacydirs.umiacs.umd.edu/~hal/SVMsequel/
Collobert, R., Bengio, S.: SVMTorch: support vector machines for large-scale regression problems. J. Mach. Learn. Res. (2001). http://bengio.abracadoudou.com/SVMTorch.html
Chang, C., Lin, C.: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol., 27:1–27:27 (2011). http://www.csie.ntu.edu.tw/~cjlin/libsvm
Wen, Z., Shi, J., He, B., et al.: ThunderSVM: a fast SVM library on GPUs and CPUs. https://github.com/zeyiwen/thundersvm
Carpenter, A.: CUSVM: a CUDA implementation of support vector classification and regression, pp. 1–9 (2009). http://patternsonascreen.net/cuSVM.html
Cotter, A., Srebro, N., Keshet, J.: A GPU-tailored approach for training kernelized SVM. In: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 805–813 (2011). http://ttic.uchicago.edu/~cotter/projects/gtsvm/
Serafini, T., Zanni, L., Zanghirati, G.: Parallel GPDT: a parallel gradient projection-based decomposition technique for support vector machines (2004). http://dm.unife.it/gpdt/
Lopes, N., Ribeiro, B.: GPUMLib: a new library to combine machine learning algorithms with graphics processing units. In: 2010 10th International Conference on Hybrid Intelligent Systems (HIS), pp. 229–232 (2010). https://sourceforge.net/projects/gpumlib/?source=typ_redirect
Wang, Z., Chu, T., Choate, L., et al.: Rgtsvm: support vector machines on a GPU in R. ArXiv Preprint ArXiv:1706.05544 (2017). https://github.com/Danko-Lab/Rgtsvm
Vedaldi, A., Fulkerson, B.: VLFeat: an open and portable library of computer vision algorithms. In: Proceedings of the 18th ACM International Conference on Multimedia, pp. 1469–1472 (2010). http://www.vlfeat.org/install-matlab.html
Jegou, H., Douze, M.: The yael library. In: Proceedings of the 22nd ACM International Conference on Multimedia, pp. 687–690 (2014). https://gforge.inria.fr/projects/yael/
Maaten, L.: Fisher kernel learning. https://lvdmaaten.github.io/fisher/Fisher_Kernel_Learning.html
Kolacek, J., Zelinka, J.: Kernel smoothing in MATLAB: theory and practice of kernel smoothing (2012). http://www.math.muni.cz/english/science-and-research/developed-software/232-matlab-toolbox.html
Sonnenburg, S., Rätsch, G., Henschel, S., et al.: The SHOGUN machine learning toolbox. J. Mach. Learn. Res. 11, 1799–1802 (2010). http://www.shogun-toolbox.org/
Allauzen, C., Mohri, M., Rostamizadeh, A.: Openkernel library (2007). http://www.openkernel.org/twiki/bin/view/Kernel/WebHome
Orabona, F.: DOGMA: A MATLAB toolbox for online learning (2009). http://dogma.sourceforge.net
Sun, Z., Ampornpunt, N., Varma, M., Vishwanathan, S.: Multiple kernel learning and the SMO algorithm. In: Advances in Neural Information Processing Systems (2010). http://manikvarma.org/code/SMO-MKL/download.html
Gonen, M., Alpaydin, E.: Multiple kernel learning algorithms. J. Mach. Learn. Res. (2011). https://users.ics.aalto.fi/gonen/jmlr11.php
Tsai, M.H.: LIBLINEAR MKL: a fast multiple kernel learning L1/L2-loss SVM solver in MATLAB. https://www.csie.ntu.edu.tw/~b95028/software/liblinear-mkl/
Varma, M., Babu, R.: More generality in efficient multiple kernel learning. In: Proceedings of the 26th Annual International Conference on Machine Learning, pp. 1065–1072 (2009). http://manikvarma.org/code/GMKL/download.html
Gonen, M., Alpaydin, E.: Localized algorithms for multiple kernel learning. Pattern Recognit. (2013). https://users.ics.aalto.fi/gonen/icpr10.php
Strobl, E., Visweswaran, S.: Deep multiple kernel learning. In: 2013 12th International Conference on Machine Learning and Applications (ICMLA) (2013). https://github.com/ericstrobl/deepMKL
Shawe-Taylor, J.: Kernel methods for pattern analysis (2004). https://www.kernel-methods.net/matlab_tools
Chen, M.: Pattern recognition and machine learning toolbox. MATLAB Central File Exchange (2016). https://www.mathworks.com/matlabcentral/fileexchange/55826-pattern-recognition-and-machine-learning-toolbox
Salakhutdinov, R., Hinton, G.: Deep Boltzmann machines. In: Proceedings of the International Conference on Artificial Intelligence and Statistics, vol. 5, pp. 448–455 (2009). http://www.cs.toronto.edu/~rsalakhu/DBM.html
Rasmusbergpalm: Restricted Boltzmann Machine. https://code.google.com/archive/p/matrbm/
Salakhutdinov, R., Hinton, G.: Restricted Boltzmann machines for collaborative filtering. In: Proceedings of the 24th International Conference on Machine Learning, pp. 791–798 (2007). http://www.cs.toronto.edu/~rsalakhu/rbm_ais.html/
Gallamine, W.: Deep belief network. https://github.com/gallamine/DBN
Demuth, H., Beale, M.: Neural Network Toolbox for Use with MATLAB–User’s Guide version 3.0 (1993). https://www.mathworks.com/help/nnet/getting-started-with-neural-network-toolbox.html
Srivastava, N.: DeepNet: a library of deep learning algorithms. http://www.cs.toronto.edu/~nitish/deepnet
Krizhevsky, A.: cuda-convnet: high-performance C++/CUDA implementation of convolutional neural networks (2012). https://code.google.com/archive/p/cuda-convnet2/
Abadi, M., Barham, P., et al.: TensorFlow: a system for large-scale machine learning. In: OSDI, vol. 16, pp. 265–283 (2016). https://www.tensorflow.org/
Collobert, R., Kavukcuoglu, K., Farabet, C.: Torch. In: Workshop on Machine Learning Open Source Software, NIPS, vol. 113 (2008). http://torch.ch/
Seide, F.: Keynote: the computer science behind the Microsoft cognitive toolkit: an open source large-scale deep learning toolkit for Windows and Linux. In: IEEE/ACM International Symposium on Code Generation and Optimization (CGO), p. xi (2017). https://www.microsoft.com/en-us/cognitive-toolkit/
Bergstra, J., Bastien, F., et al.: Theano: deep learning on GPUs with Python. In: NIPS 2011, Big Learning Workshop, Granada, Spain, vol. 3, pp. 1–48 (2011). http://deeplearning.net/software/theano/
Jia, Y., Shelhamer, E., et al.: Caffe: convolutional architecture for fast feature embedding. In: Proceedings of the 22nd ACM International Conference on Multimedia, pp. 675–678 (2014). http://caffe.berkeleyvision.org/
Chollet, F.: Keras (2015). https://keras.io/
Chen, T., Li, M., Li, Y., et al.: MXNet: a flexible and efficient machine learning library for heterogeneous distributed systems. ArXiv Preprint ArXiv:1512.01274 (2015). https://mxnet.apache.org/
Gibson, A., Nicholson, C., Patterson, J.: Deeplearning4j: open-source distributed deep learning for the JVM. Apache Softw. Found. License 2 (2016). https://deeplearning4j.org/
Tokui, S., Oono, K., Hido, S.: Chainer: a next-generation open source framework for deep learning. In: Proceedings of Workshop on Machine Learning Systems in The Twenty-Ninth Annual Conference on Neural Information Processing Systems (NIPS) (2015). https://chainer.org/
Nervana Systems: Neon deep learning framework. https://neon.nervanasys.com/index.html/
Ye, C., Zhao, C., Yang, Y., Fermüller, C.: LightNet: a versatile, standalone MATLAB-based environment for deep learning. In: Proceedings of the 2016 ACM on Multimedia Conference, pp. 1156–1159 (2016). https://github.com/yechengxi/LightNet
Ooi, B.C., Tan, K.L., Wang, S., et al.: SINGA: a distributed deep learning platform. In: Proceedings of the 23rd ACM International Conference on Multimedia, pp. 685–688 (2015). https://singa.incubator.apache.org/en/index.html
Yan, K.: Feature selection toolbox. https://www.mathworks.com/matlabcentral/fileexchange/56723-yan-prtools
Duin, R.P.W.: PRTools version 3.0: a MATLAB toolbox for pattern recognition. In: Proceedings of the SPIE (2000). http://prtools.org/software/
Somol, P., Vacha, P., Mikes, S., et al.: Introduction to feature selection toolbox 3–the C++ library for subset search, data modeling and classification. Research Report for Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic (2010). http://fst.utia.cz/?fst3
Sugiyama Lab, University of Tokyo: Maximum likelihood feature selection (MLFS). http://www.ms.k.u-tokyo.ac.jp/software.html#MLFS
Kanamori, T., Sugiyama, M.: A least-squares approach to direct importance estimation. J. Mach. Learn. Res. 10, 1391–1445 (2009). http://www.ms.k.u-tokyo.ac.jp/software.html#LSFS
Jitkrittum, W., Sugiyama, M.: Feature selection via L1-penalized squared-loss mutual information. IEICE Trans. Inf. Syst. 96, 1513–1524 (2013). http://wittawat.com/pages/l1lsmi.html
Roffo, G.: Feature selection library (MATLAB Toolbox). ArXiv Preprint ArXiv:1607.01327 (2016). https://www.mathworks.com/matlabcentral/fileexchange/56937-feature-selection-library
Maaten, L.: Matlab toolbox for dimensionality reduction (2007). https://lvdmaaten.github.io/drtoolbox
Salakhutdinov, R., Hinton, G.: Reducing the dimensionality of data with neural networks. Science 313, 504–507 (2006). http://www.cs.toronto.edu/~hinton/MatlabForSciencePaper.html
Maaten, L.: Learning a parametric embedding by preserving local structure. In: Artificial Intelligence and Statistics, pp. 384–391 (2009). https://lvdmaaten.github.io/tsne/
He, X., Cai, D., et al.: Neighborhood preserving embedding. In: Tenth IEEE International Conference on Computer Vision, vol. 2, pp. 1208–1213 (2005). http://www.cad.zju.edu.cn/home/dengcai/Data/DimensionReduction.html
Cai, D., He, X., Zhou, K., Han, J., Bao, H.: Locality sensitive discriminant analysis. In: International Joint Conference on Artificial Intelligence (2007). http://www.cad.zju.edu.cn/home/dengcai/Data/DimensionReduction.html
Cai, D., He, X., Han, J.: Semi-supervised discriminant analysis. In: Proceedings of International Conference on Computer Vision (2007). http://www.cad.zju.edu.cn/home/dengcai/Data/DimensionReduction.html
He, X., Cai, D., Han, J.: Learning a maximum margin subspace for image retrieval. IEEE Trans. Knowl. Data Eng. 20 (2008). http://www.cad.zju.edu.cn/home/dengcai/Data/DimensionReduction.html
Suzuki, T., Sugiyama, M.: Sufficient dimension reduction via squared-loss mutual information estimation. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 804–811 (2010). http://www.ms.k.u-tokyo.ac.jp/software.html#LSDR
Sugiyama, M., Ide, T., et al.: Semi-supervised local Fisher discriminant analysis for dimensionality reduction. Mach. Learn. 78, 35 (2010). http://www.ms.k.u-tokyo.ac.jp/software.html#SELF
Sugiyama, M.: Local Fisher discriminant analysis for supervised dimensionality reduction. In: Proceedings of the 23rd International Conference on Machine Learning, pp. 905–912. ACM (2006). http://www.ms.k.u-tokyo.ac.jp/software.html#LFDA
Li, W.: Learning to hash (L2H). https://cs.nju.edu.cn/lwj/L2H.html
Jegou, H., Douze, M., Schmid, C.: Product quantization for nearest neighbor search. IEEE Trans. Pattern Anal. Mach. Intell. 33(1), 117–128 (2011). http://people.rennes.inria.fr/Herve.Jegou/projects/ann.html
Cite this chapter
Azim, T., Ahmed, S. (2018). Open Source Knowledge Base for Machine Learning Practitioners. In: Composing Fisher Kernels from Deep Neural Models. SpringerBriefs in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-319-98524-4_5
Print ISBN: 978-3-319-98523-7
Online ISBN: 978-3-319-98524-4