Rademacher Complexity Analysis for Matrixized and Vectorized Classifier

  • Zhe Wang
  • Wenbo Jie
  • Daqi Gao
  • Jin Xu
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 124)

Abstract

It has been empirically shown that the matrixized classifier design is superior to the vectorized one in terms of classification performance. However, this superiority has not yet been demonstrated theoretically. To this end, this manuscript analyzes the general risk bounds of both the matrixized and the vectorized classifier. We adopt a risk bound based on the Rademacher complexity, and therefore investigate the Rademacher complexity of both classifiers. Since the solution space of the matrixized classifier function is contained in that of the vectorized one, it can be proven that the Rademacher complexity of the matrixized classifier is less than that of the vectorized one. As a result, the general risk bound of the matrixized classifier is tighter than that of the vectorized one. Further, we compute the empirical Rademacher complexity of both the matrixized and vectorized classifiers and discuss the results.
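For illustration, the containment argument can be made concrete with a small numerical sketch. The code below gives a hypothetical Monte Carlo estimate of the empirical Rademacher complexity under assumed function classes (the abstract does not specify the exact classes or norm constraints): a vectorized class {x -> w^T x : ||w||_2 <= B} and a matrixized bilinear class {X -> u^T X v : ||u||_2 ||v||_2 <= B}. Under these constraints each supremum has a closed form, (B/n)||M||_F for the vectorized class and (B/n) sigma_max(M) for the matrixized class, where M = sum_i s_i X_i and the s_i are Rademacher signs; since sigma_max(M) <= ||M||_F, the matrixized estimate can never exceed the vectorized one.

    import numpy as np

    # Illustrative sketch under ASSUMED function classes, not the paper's
    # exact setup: empirical Rademacher complexity of a vectorized linear
    # class {x -> w^T x, ||w||_2 <= B} versus a matrixized bilinear class
    # {X -> u^T X v, ||u||_2 ||v||_2 <= B}, estimated by Monte Carlo.

    def rademacher_vectorized(X_mats, B=1.0, n_draws=2000, seed=0):
        # sup_{||w|| <= B} (1/n) w^T sum_i s_i vec(X_i) = (B/n) ||M||_F
        rng = np.random.default_rng(seed)
        n = len(X_mats)
        vecs = np.array([X.ravel() for X in X_mats])       # n x (d1*d2)
        total = 0.0
        for _ in range(n_draws):
            s = rng.choice([-1.0, 1.0], size=n)            # Rademacher signs
            total += B / n * np.linalg.norm(s @ vecs)      # closed-form supremum
        return total / n_draws

    def rademacher_matrixized(X_mats, B=1.0, n_draws=2000, seed=0):
        # sup_{||u|| ||v|| <= B} (1/n) u^T M v = (B/n) sigma_max(M)
        rng = np.random.default_rng(seed)
        n = len(X_mats)
        total = 0.0
        for _ in range(n_draws):
            s = rng.choice([-1.0, 1.0], size=n)
            M = sum(si * Xi for si, Xi in zip(s, X_mats))  # M = sum_i s_i X_i
            total += B / n * np.linalg.norm(M, ord=2)      # largest singular value
        return total / n_draws

    rng = np.random.default_rng(42)
    data = [rng.standard_normal((8, 8)) for _ in range(50)]  # toy 8x8 "images"
    print("vectorized :", rademacher_vectorized(data))
    print("matrixized :", rademacher_matrixized(data))

On such toy data the matrixized estimate comes out strictly smaller, consistent with the tighter risk bound stated above.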

Keywords

Discriminant Function · General Risk · Pattern Representation · Gradient Descent Technique · Margin Vector

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Zhe Wang¹
  • Wenbo Jie¹
  • Daqi Gao¹
  • Jin Xu¹

  1. Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai, P.R. China
