
Microarray Design Using the Hilbert–Schmidt Independence Criterion

  • Justin Bedo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5265)

Abstract

This paper explores the design problem of selecting a small subset of clones from a large pool for the creation of a microarray plate. A new kernel-based unsupervised feature selection method using the Hilbert–Schmidt independence criterion (HSIC) is presented and evaluated on three microarray datasets: the Alon colon cancer dataset, the van ’t Veer breast cancer dataset, and a multiclass cancer of unknown primary dataset. The experiments show that subsets selected by the HSIC achieved performance equivalent to or better than supervised feature selection, with the added benefit that the selected subsets are not target specific.
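The core idea, selecting a feature subset whose kernel retains maximal HSIC dependence with the kernel of the full dataset, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it uses the biased empirical HSIC estimate tr(KHLH)/m², a linear kernel, and a simple greedy forward search (the related BAHSIC family also uses backward elimination); the function names are my own.

```python
import numpy as np

def hsic(K, L):
    """Biased empirical HSIC estimate: trace(K H L H) / m^2,
    where H = I - (1/m) * ones centers the kernel matrices."""
    m = K.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m
    return np.trace(K @ H @ L @ H) / m**2

def select_features(X, k):
    """Greedily pick k feature columns of X (samples x features) whose
    linear kernel has maximal HSIC with the full-dataset linear kernel."""
    m, n = X.shape
    L = X @ X.T                      # reference kernel on all features
    chosen, remaining = [], list(range(n))
    for _ in range(k):
        best_j, best_score = None, -np.inf
        for j in remaining:
            cols = chosen + [j]
            K = X[:, cols] @ X[:, cols].T   # kernel on candidate subset
            score = hsic(K, L)
            if score > best_score:
                best_j, best_score = j, score
        chosen.append(best_j)
        remaining.remove(best_j)
    return chosen
```

Because the reference kernel is built from the data alone, the procedure needs no class labels, which is why the resulting subset is not tied to any particular prediction target.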

Keywords

Feature Selection, Feature Subset, Linear Kernel, Full Dataset, Polynomial Kernel

References

  1. Gretton, A., Bousquet, O., Smola, A., Schölkopf, B.: Measuring statistical dependence with Hilbert–Schmidt norms. In: Algorithmic Learning Theory: 16th International Conference (2005)
  2. Cristianini, N., Shawe-Taylor, J.: On kernel-target alignment. In: Advances in Neural Information Processing Systems 14 (2002)
  3. Gretton, A., Borgwardt, K., Rasch, M., Schölkopf, B., Smola, A.: A kernel method for the two-sample problem. In: Advances in Neural Information Processing Systems (2007)
  4. Song, L., Bedo, J., Borgwardt, K., Gretton, A., Smola, A.: Gene selection via the BAHSIC family of algorithms. Bioinformatics (2007)
  5. Song, L., Smola, A., Gretton, A., Borgwardt, K., Bedo, J.: Supervised feature selection via dependence estimation. In: Proceedings of the 24th International Conference on Machine Learning (2007)
  6. Alon, U., Barkai, N., Notterman, D., Gish, K., Ybarra, S., Mack, D., Levine, A.J.: Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc. Natl. Acad. Sci. USA (1999)
  7. Ambroise, C., McLachlan, G.J.: Selection bias in gene extraction on the basis of microarray gene-expression data. Proc. Natl. Acad. Sci. USA 99(10), 6562–6566 (2002)
  8. Guyon, I., Weston, J., Barnhill, S., Vapnik, V.: Gene selection for cancer classification using support vector machines. Machine Learning 46, 389–422 (2002)
  9. Huang, T., Kecman, V.: Gene extraction for cancer diagnosis by support vector machines — an improvement. Artificial Intelligence in Medicine (2005)
  10. van ’t Veer, L., Dai, H., van de Vijver, M.J., He, Y.D., Hart, A.A.M., Mao, M., Peterse, H.L., van der Kooy, K., Marton, M.J., Witteveen, A.T., Schreiber, G.J., Kerkhoven, R.M., Roberts, C., Linsley, P.S., Bernards, R., Friend, S.: Gene expression profiling predicts clinical outcome of breast cancer. Nature 415(6871), 530–536 (2002)
  11. Tothill, R.W., Kowalczyk, A., Rischin, D., Bousioutas, A., Haviv, I., van Laar, R.K., Waring, P.M., Zalcberg, J., Ward, R., Biankin, A., Sutherland, R.L., Henshall, S.M., Fong, K., Pollack, J.R., Bowtell, D., Holloway, A.J.: An expression-based site of origin diagnostic method designed for clinical application to cancer of unknown origin. Cancer Res. 65(10), 4031–4040 (2005)
  12. Berlinet, A., Thomas-Agnan, C.: Reproducing Kernel Hilbert Spaces in Probability and Statistics. Springer, Heidelberg (2003)
  13. Schölkopf, B., Smola, A.J.: Learning with Kernels. MIT Press, Cambridge (2002)
  14. Guyon, I.: An introduction to variable and feature selection. Journal of Machine Learning Research 3, 1157–1182 (2003)
  15. Bedo, J., Sanderson, C., Kowalczyk, A.: An efficient alternative to SVM based recursive feature elimination with applications in natural language processing and bioinformatics. In: Proceedings of the Australian Joint Conference on Artificial Intelligence (2006)
  16. Rifkin, R., Klautau, A.: In defense of one-vs-all classification. Journal of Machine Learning Research (2004)
  17. Efron, B.: How biased is the apparent error rate of a prediction rule? Journal of the American Statistical Association (1986)
  18. Hanley, J.A., McNeil, B.J.: The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 143(1), 29–36 (1982)
  19. Hand, D., Till, R.: A simple generalisation of the area under the ROC curve for multiple class classification problems. Machine Learning (2001)
  20. Gabriel, K.: The biplot graphic display of matrices with application to principal component analysis. Biometrika (1971)

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Justin Bedo
  1. The Australian National University, NICTA, and the University of Melbourne, Australia
