Machine learning and intelligence science: Sino-foreign interchange workshop IScIDE2010 (A)

Editorial
  • 243 Downloads

References

  1. 1.
    He X, Niyogi P. Locality preserving projections. In: Advances in Neural Information Processing Systems 16. Cambridge, MA: MIT Press, 2003, 152–160Google Scholar
  2. 2.
    He X, Cai D, Niyogi P. Tensor subspace analysis. In: Proceedings of Advances in Neural Information Processing Systems. 2005, 18: 499–506Google Scholar
  3. 3.
    Kohonen T. Self-organized formation of topologically correct feature map. Biological Cybernetics, 1982, 43(1): 59–69MathSciNetMATHCrossRefGoogle Scholar
  4. 4.
    Kohonen T. Self-Organizing Maps. 2nd ed. Berlin: Springer, 1997MATHGoogle Scholar
  5. 5.
    Yin H. Data visualisation and manifold mapping using the ViSOM. Neural Networks, 2002, 15(8–9): 1005–1016CrossRefGoogle Scholar
  6. 6.
    Yin H. On multidimensional scaling and the embedding of self-organising maps. Neural Networks, 2008, 21(2–3): 160–169CrossRefGoogle Scholar
  7. 7.
    Oja E. Neural networks, principal components, and subspaces. International Journal of Neural Systems, 1989, 1(1): 61–68MathSciNetCrossRefGoogle Scholar
  8. 8.
    Oja E, Ogawa H, Wangviwattana J. Learning in nonlinear constrained Hebbian networks. In: Proceedings of ICANN’91. 1991, 385–390Google Scholar
  9. 9.
    Xu L. Least MSE reconstruction for self-organization. In: Proceedings of IJCNN91-Singapore. 1991, 3: 2363–2373Google Scholar
  10. 10.
    Hastie T, Stuetzle W. Principal curves. Journal of the American Statistical Association, 1989, 84(406): 502–516MathSciNetMATHCrossRefGoogle Scholar
  11. 11.
    LeBlanc M, Tibshirani R J. Adaptive principal surfaces. Journal of the American Statistical Association, 1994, 89(425): 53–64MATHCrossRefGoogle Scholar
  12. 12.
    Scholkopf B, Smola A, Muller K R. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 1998, 10(5): 1299–1319CrossRefGoogle Scholar
  13. 13.
    Ham J, Lee D D, Mika S, Scholkopf B. A kernel view of the dimensionality reduction of manifolds. In: Proceedings of the 21st International Conference on Machine Learning. 2004, 369–376Google Scholar
  14. 14.
    Xu L. Independent Subspaces. In: Ramón J, Dopico R, Dorado J, Pazos A, eds. Encyclopedia of Artificial Intelligence. Hershey(PA): IGI Global, 2008, 903–912Google Scholar
  15. 15.
    Yang J, Zhang D, Frangi A F, Yang J Y. Two-dimensional PCA: a new approach to appearance-based face representation and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, 26(1): 131–137CrossRefGoogle Scholar
  16. 16.
    Xu L, Krzyzak A, Suen C Y. Several methods for combining multiple classifiers and their applications in handwritten character recognition. IEEE Transactions on Systems, Man, and Cybernetics, 1992, 22: 418–435CrossRefGoogle Scholar
  17. 17.
    Kittler J, Hatef M, Duin R P W, Matas J. On combining classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(3): 226–239CrossRefGoogle Scholar
  18. 18.
    Hansen L K, Salamon P. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990, 12(10): 993–1001CrossRefGoogle Scholar
  19. 19.
    Xu L, Krzyzak A, Sun C Y. Associative switch for combining multiple classifiers. In: Proceedings of IJCNN91, Seattle, WA. 1991, (I): 43–48Google Scholar
  20. 20.
    Wolpert D H. Stacked generalization. Neural Networks, 1992, 5(2): 241–259CrossRefGoogle Scholar
  21. 21.
    Baxt WG. Improving the accuracy of an artificial neural network using multiple differently trained networks. Neural Computation, 1992, 4(5): 772–780CrossRefGoogle Scholar
  22. 22.
    Breiman L. Stacked Regression. Department of Statistics, Berkeley. 1992, TR-367Google Scholar
  23. 23.
    Jacobs R A, Jordan M I, Nowlan S J, Hinton G E. Adaptive mixtures of local experts. Neural Computation, 1991, 3(1): 79–87CrossRefGoogle Scholar
  24. 24.
    Jordan M I, Jacobs R A. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 1994, 6(2): 181–214CrossRefGoogle Scholar
  25. 25.
    Jordan M I, Xu L. Convergence Results for The EM Approach to Mixtures of Experts Architectures. Neural Networks, 1995, 8(9): 1409–1431CrossRefGoogle Scholar
  26. 26.
    Xu L, Jordan M I, Hinton G E. An Alternative Model for Mixtures of Experts. In: Cowan, Tesauro, Alspector, eds. Advances in Neural Information Processing Systems 7. MIT Press, 1995, 633–640Google Scholar
  27. 27.
    Xu L, Jordan M I, Hinton G E. A modified gating network for the mixtures of experts architecture, Proc. In: Proceedings of WCNN’94, San Diego, CA. 1994, (2): 405–410Google Scholar
  28. 28.
    Xu L, Jordan M I. EM learning on a generalized finite mixture model for combining multiple classifiers. In: Proceedings of WCNN’93. 1993, (IV): 227–230Google Scholar
  29. 29.
    Dempster A P, Laird N M, Rubin D B. Maximum-likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B. Methodological, 1977, 39(1): 1–38MathSciNetMATHGoogle Scholar
  30. 30.
    Redner R A, Walker H F. Mixture densities, maximum likelihood, and the EM algorithm. SIAM Review, 1984, 26(2): 195–239MathSciNetMATHCrossRefGoogle Scholar
  31. 31.
    Xu L, Amari S. Combining classifiers and learning mixture-of-experts. In: Dopico J R R, Dorado J, Pazos A, eds. Encyclopedia of Artificial Intelligence. 2009, 318–326Google Scholar
  32. 32.
    Lu B L, Ito M. Task decomposition and module combination based on class relations: a modular neural network for pattern classification. IEEE Transactions on Neural Networks, 1999, 10(5): 1244–1256CrossRefGoogle Scholar
  33. 33.
    Miller D J, Uyar H S. A mixture of experts classifier with learning based on both labelled and unlabelled data. In: Mozer M, Jordan M I, Petsche T, eds. Advances in Neural Information Processing Systems 9. Cambridge: MIT Press, 1997, 571–577Google Scholar
  34. 34.
    Xu L. Bayesian Ying Yang system and theory as a unified statistical learning approach: (I) Unsupervised and semi-unsupervised learning. In: Amari S, Kassabov N, eds. Brain-like Computing and Intelligent Information Systems. Springer-Verlag, 1997, 241–274Google Scholar
  35. 35.
    Zhou Z H, Wu J, Tang W. Ensembling neural networks: many could be better than all. Artificial Intelligence, 2002, 137(1–2): 239–263MathSciNetMATHCrossRefGoogle Scholar
  36. 36.
    Zhou Z H, Li M. Tri-training: exploiting unlabeled data using three classifiers. IEEE Transactions on Knowledge and Data Engineering, 2005, 17(11): 1529–1541CrossRefGoogle Scholar
  37. 37.
    Stockham T G, Cannon T M, Ingebretsen R B. Blind deconvolution through digital signal processing. Proceedings of the IEEE, 1975, 63(4): 678–692CrossRefGoogle Scholar
  38. 38.
    Kundur D, Hatzinakos D. Blind image deconvolution revisited. IEEE Signal Processing Magazine, 1996, 13(6): 61–63CrossRefGoogle Scholar
  39. 39.
    Xu L, Yan P F, Chang T. Semi-blind deconvolution of finite length sequence: (I) linear problem & (II) nonlinear problem. SCIENTIA SINICA, Series A, 1987, (12): 1318–1344MathSciNetGoogle Scholar
  40. 40.
    Xu L. Bayesian Ying-Yang system, best harmony learning, and five action circling. Frontiers of Electrical and Electronic Engineering in China, 2010, 5(3): 281–328CrossRefGoogle Scholar
  41. 41.
    Caianiello E R. Some remarks on organization and structure. Biological Cybernetics, 1977, 26(3): 151–158MATHCrossRefGoogle Scholar

Copyright information

© Higher Education Press and Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  1. 1.Department of Computer Science and EngineeringThe Chinese University of Hong KongHong KongChina
  2. 2.Department of AutomationTsinghua UniversityBeijingChina

Personalised recommendations