Random Convolution Ensembles

  • Michael Mayo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4810)


A novel method for creating diverse ensembles of image classifiers is proposed. The idea is that, for each base image classifier in the ensemble, a random image transformation is generated and applied to all of the images in the labeled training set. The base classifiers are then learned using features extracted from these randomly transformed versions of the training data, and the result is a highly diverse ensemble of image classifiers. This approach is evaluated on a benchmark pedestrian detection dataset and shown to be effective.
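The procedure described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the kernel distribution (zero-mean Gaussian), the two summary features, and the nearest-centroid base learner are all assumptions made here for brevity; the paper's actual feature extractors and base classifiers may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_kernel(size=3):
    """Draw a random zero-mean convolution kernel (assumed distribution)."""
    k = rng.standard_normal((size, size))
    return k - k.mean()

def convolve2d(img, kernel):
    """Plain 'valid' 2-D convolution via sliding windows."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def features(img, kernel):
    """Toy feature vector extracted from the randomly convolved image."""
    f = convolve2d(img, kernel)
    return np.array([f.mean(), f.std()])

class NearestCentroid:
    """Stand-in base classifier (hypothetical choice)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def train_ensemble(images, labels, n_members=5):
    """One random kernel per member: each base learner sees a differently
    transformed version of the same labeled training set."""
    ensemble = []
    for _ in range(n_members):
        k = random_kernel()
        X = np.array([features(im, k) for im in images])
        ensemble.append((k, NearestCentroid().fit(X, labels)))
    return ensemble

def predict_ensemble(ensemble, images):
    """Combine the diverse members by simple majority vote."""
    votes = np.array([clf.predict(np.array([features(im, k) for im in images]))
                      for k, clf in ensemble])
    return np.array([np.bincount(v).argmax() for v in votes.T])
```

Because every member's kernel is drawn independently, each one sees a different transformed view of the data, which is the source of the ensemble's diversity.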


Keywords: Image Classification · Random Convolution · Pedestrian Detection





Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Michael Mayo
  1. Dept. of Computer Science, University of Waikato, Private Bag 3105, Hamilton, New Zealand
