Multiview Semi-supervised Learning
Semi-supervised learning addresses scenarios in which only a small portion of the training data is labeled. In multiview settings, the unlabeled data can be used to regularize the prediction functions and thus shrink the search space. This chapter introduces two categories of multiview semi-supervised learning methods. The first comprises the co-training style methods, in which the prediction function of each view is trained with its own objective and is improved with the help of the other views. The second comprises the co-regularization style methods, in which a single objective function trains the prediction functions of all views simultaneously.
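For the co-regularization style, a representative two-view objective (in the spirit of Sindhwani et al. 2005; the loss $\ell$ and the trade-off parameters $\gamma$ are generic placeholders) couples the views by penalizing their disagreement on the $u$ unlabeled points in addition to fitting the $l$ labeled ones:

```latex
\min_{f^{(1)},\,f^{(2)}}\;
\sum_{i=1}^{l}\Big[\ell\big(f^{(1)}(x_i^{(1)}),\,y_i\big)
              + \ell\big(f^{(2)}(x_i^{(2)}),\,y_i\big)\Big]
+ \gamma_1\,\|f^{(1)}\|^2 + \gamma_2\,\|f^{(2)}\|^2
+ \gamma_c \sum_{j=1}^{l+u}\Big(f^{(1)}(x_j^{(1)}) - f^{(2)}(x_j^{(2)})\Big)^2
```

Here both prediction functions are optimized jointly under a single objective, rather than each view maintaining its own training loop as in co-training.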
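The co-training idea can be illustrated with a minimal sketch. The data, the toy nearest-centroid classifier, and the `co_train` helper below are all hypothetical illustrations (not the chapter's exact algorithm): each view's classifier picks the unlabeled examples it is most confident about and hands those pseudo-labels to the other view's training set, following the scheme of Blum and Mitchell (1998).

```python
import numpy as np

class CentroidClassifier:
    """Toy nearest-centroid classifier with a margin-based confidence score
    (a stand-in for any classifier with a confidence estimate)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def _scores(self, X):
        # Negative Euclidean distance to each class centroid (higher = closer).
        return -np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)

    def predict(self, X):
        return self.classes_[np.argmax(self._scores(X), axis=1)]

    def confidence(self, X):
        # Margin between the best and second-best class scores.
        s = np.sort(self._scores(X), axis=1)
        return s[:, -1] - s[:, -2]


def co_train(X1, X2, y, labeled, rounds=5, k=2):
    """Co-training sketch over two views X1, X2.

    In each round, the classifier of one view pseudo-labels its k most
    confident unlabeled points for the *other* view's training set.
    Only labels at the indices in `labeled` are treated as known.
    """
    y1, y2 = y.copy(), y.copy()          # per-view pseudo-label buffers
    lab1, lab2 = set(labeled), set(labeled)
    pool = set(range(len(y))) - set(labeled)
    for _ in range(rounds):
        c1 = CentroidClassifier().fit(X1[sorted(lab1)], y1[sorted(lab1)])
        c2 = CentroidClassifier().fit(X2[sorted(lab2)], y2[sorted(lab2)])
        if not pool:
            break
        cand = sorted(pool)
        # View 1 teaches view 2, and vice versa.
        for clf, X, ty, tlab in ((c1, X1, y2, lab2), (c2, X2, y1, lab1)):
            picks = [cand[i] for i in np.argsort(clf.confidence(X[cand]))[-k:]]
            ty[picks] = clf.predict(X[picks])
            tlab.update(picks)
        pool -= (lab1 | lab2)
    return c1, c2
```

The two classifiers are trained on their own (growing) labeled sets, so each view keeps its own objective; the coupling happens only through the exchanged pseudo-labels, which is what distinguishes this style from co-regularization.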
- Blum A, Mitchell T (1998) Combining labeled and unlabeled data with co-training. In: Proceedings of the 11th annual conference on computational learning theory, ACM, pp 92–100
- Blum A, Mansour Y (2017) Efficient co-training of linear separators under weak dependence. In: Proceedings of the 30th annual conference on learning theory, pp 302–318
- Nigam K, Ghani R (2000) Analyzing the effectiveness and applicability of co-training. In: Proceedings of the 9th international conference on information and knowledge management, ACM, pp 86–93
- Sindhwani V, Niyogi P, Belkin M (2005) A co-regularization approach to semi-supervised learning with multiple views. In: Proceedings of the ICML workshop on learning with multiple views, ACM, pp 74–79
- Sun S (2011) Multi-view Laplacian support vector machines. In: Proceedings of the 7th international conference on advanced data mining and applications, Springer, pp 209–222
- Zhou ZH, Zhan DC, Yang Q (2007) Semi-supervised learning with very few labeled training examples. In: Proceedings of the 22nd AAAI national conference on artificial intelligence, AAAI, vol 1, pp 675–680