Kernel semi-supervised graph embedding model for multimodal and mixmodal data


Acknowledgements

This work was supported by National Natural Science Foundation of China (Grant Nos. 61673027, 61503375) and Fundamental Research Funds for the Central Universities (Grant Nos. CXTD10-05, 18QD18 in UIBE, DUT19LK18).

Author information

Correspondence to Tianguang Chu.

About this article

Cite this article

Zhang, Q., Li, R. & Chu, T. Kernel semi-supervised graph embedding model for multimodal and mixmodal data. Sci. China Inf. Sci. 63, 119204 (2020). https://doi.org/10.1007/s11432-018-9535-9
