Weight-and-Universum-based semi-supervised multi-view learning machine

Abstract

Semi-supervised multi-view learning machines are developed to process semi-supervised multi-view data sets, which consist of labeled and unlabeled instances. In real-world applications, however, only a few instances of a multi-view data set are labeled, owing to limitations of manpower and cost. As a result, little of the prior knowledge necessary for designing a learning machine is available. Moreover, in practice, different views and features play diverse discriminant roles, whereas traditional learning machines treat these roles equally and assign the same weight to all of them merely for convenience. To solve these problems, we introduce Universum learning to obtain more prior knowledge and assign different weights to views and features to reflect their diverse discriminant roles. The proposed learning machine is named the weight-and-Universum-based semi-supervised multi-view learning machine (WUSM). In WUSM, we first obtain the weights of views and features. Then, on the basis of these weights, we construct a Universum set to obtain more prior knowledge. Different from traditional constructions, this construction makes full use of the information of all labeled and unlabeled instances rather than only a pair of positive and negative training instances. Finally, we design the machine using the Universum set along with the original data set. Our contributions are as follows. (1) By using all labeled and unlabeled instances of the data set, the Universum set provides more useful prior knowledge. (2) WUSM considers the diversities of views and features. (3) WUSM advances the development of semi-supervised multi-view learning machines. Experiments on bipartite ranking, feature selection, dimensionality reduction, classification, clustering, etc. validate the advantages of WUSM and support the conclusion that introducing Universum learning, view weights, and feature weights boosts the performance of a semi-supervised multi-view learning machine.
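The abstract outlines three stages: obtaining view and feature weights, constructing a Universum set from all labeled and unlabeled instances, and training with the Universum set alongside the original data. The paper's exact formulation is not reproduced on this page, so the Python sketch below is purely illustrative: the variance-based feature weights, the class-mean-separation view weight, the convex-combination Universum construction, and all function names (`feature_weights`, `view_weight`, `build_universum`, `wusm_stages`) are assumptions chosen to make the first two stages concrete, not the authors' definitions. Labels are assumed to be +1/-1, and the final Universum-regularized training stage is omitted.

```python
import numpy as np

def feature_weights(X):
    """Assumed heuristic: weight each feature by its (normalized) variance."""
    v = X.var(axis=0) + 1e-12
    return v / v.sum()

def view_weight(Xl, y):
    """Assumed heuristic: weight a view by how far apart its class means lie."""
    return np.linalg.norm(Xl[y == 1].mean(axis=0) - Xl[y == -1].mean(axis=0))

def build_universum(Xl, y, Xu, n_univ, rng):
    """Assumed construction: each Universum point is a convex combination of one
    positive, one negative, and one unlabeled instance, so that labeled and
    unlabeled data both contribute (a generalization of the classical
    'in-between' Universum built from positive/negative pairs only)."""
    pos, neg = Xl[y == 1], Xl[y == -1]
    universum = []
    for _ in range(n_univ):
        p = pos[rng.integers(len(pos))]
        n = neg[rng.integers(len(neg))]
        u = Xu[rng.integers(len(Xu))]
        w = rng.dirichlet([1.0, 1.0, 1.0])        # random convex weights
        universum.append(w[0] * p + w[1] * n + w[2] * u)
    return np.asarray(universum)

def wusm_stages(views_labeled, y, views_unlabeled, n_univ=50, seed=0):
    """Stages 1-2 of the pipeline: per-view/per-feature weights and Universum sets.
    Stage 3 (training a Universum-regularized multi-view classifier) is omitted."""
    rng = np.random.default_rng(seed)
    result = []
    for Xl, Xu in zip(views_labeled, views_unlabeled):
        fw = feature_weights(np.vstack([Xl, Xu]))  # feature weights from all data
        Xl_w, Xu_w = Xl * fw, Xu * fw              # apply feature weights
        result.append({
            "feature_weights": fw,
            "view_weight": view_weight(Xl_w, y),
            "universum": build_universum(Xl_w, y, Xu_w, n_univ, rng),
        })
    return result
```

In a full implementation, the per-view Universum sets produced above would enter the training objective through an additional Universum loss term and the view weights would scale each view's contribution; both details depend on the paper's optimization problem and are not reproduced here.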



Notes

  1. The example in this figure is also given in Liu et al. (2016).

  2. For more than two classes, the problem can be decomposed into several binary classification problems, and the final solution combines the optimal results of these binary problems.

  3. http://archive.ics.uci.edu/ml/datasets/Multiple+Features.

  4. http://archive.ics.uci.edu/ml/datasets/Reuters+RCV1+RCV2+Multilingual%2C+Multiview+Text+Categorization+Test+collection.

  5. http://archive.ics.uci.edu/ml/datasets/Corel+Image+Features.

  6. For example, if a training data set consists of three classes with 100, 120, and 140 instances, respectively, then \(N_\mathrm{e-max}=220\).

  7. ROC: receiver operating characteristic.

  8. For these evaluation metrics, \(\text{precision}=\frac{\text{TP}}{\text{TP}+\text{FP}}\), \(\text{recall}=\frac{\text{TP}}{\text{TP}+\text{FN}}\), \(\text{specificity}=\frac{\text{TN}}{\text{TN}+\text{FP}}\), \(\text{accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{FP}+\text{TN}+\text{FN}}\), and \(\text{F-measure}=\frac{2\times \text{recall}\times \text{precision}}{\text{recall}+\text{precision}}\). Here, TP: true positive, TN: true negative, FP: false positive, FN: false negative. A short computation sketch is given after these notes.

  9. Owing to space limitations, we report only the accuracy results rather than precision, recall, specificity, and F-measure; the results on the other evaluation metrics do not change our conclusions.
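For reference, the formulas in note 8 can be sanity-checked with a few lines of Python; the function below simply transcribes them, and the function name and the example counts in the comment are illustrative only.

```python
def binary_metrics(tp, fp, tn, fn):
    """Evaluation metrics of note 8, computed from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f_measure = 2 * recall * precision / (recall + precision)
    return {"precision": precision, "recall": recall, "specificity": specificity,
            "accuracy": accuracy, "F-measure": f_measure}

# Example: binary_metrics(tp=40, fp=10, tn=45, fn=5) gives precision 0.80,
# recall 0.889, specificity 0.818, accuracy 0.85, and F-measure 0.842.
```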

References

  1. Amini MR, Usunier N, Goutte C (2009) Learning from multiple partially observed views-an application to multilingual text categorization. In: Neural information processing systems (NIPS), pp 28–36

  2. Asuncion A, Newman D (2007) UCI machine learning repository. http://archive.ics.uci.edu/ml/. Accessed Sept 2019

  3. Barros RSMD, Hidalgo JIG, Cabral DRDL (2018) Wilcoxon rank sum test drift detector. Neurocomputing 275:1954–1963

  4. Bartlett P, Boucheron S, Lugosi G (2002) Model selection and error estimation. Mach Learn 48:85–113

  5. Chen SC, Zhang CS (2009) Selecting informative Universum sample for semi-supervised learning. In: International joint conferences on artificial intelligence, pp 1016–1021

  6. Chen SC, Wang Z, Tian YJ (2007) Matrix-pattern-oriented Ho–Kashyap classifier with regularization learning. Pattern Recognit 40(5):1533–1543

  7. Chen XH, Chen SC, Xue H (2012) Universum linear discriminant analysis. Electron Lett 48(22):1407–1409

  8. Chen XH, Yin HJ, Jiang F, Wang LP (2018) Multi-view dimensionality reduction based on Universum learning. Neurocomputing 275:2279–2286

  9. Cherkassky V, Dai W (2009) Empirical study of the Universum SVM learning for high-dimensional data. In: International conference on artificial neural networks, pp 932–941

  10. Demsar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7(1):1–30

  11. Deng MQ, Wang C, Chen QF (2016) Human gait recognition based on deterministic learning through multiple views fusion. Pattern Recognit Lett 78(C):56–63

  12. Dhar S (2014) Analysis and extensions of Universum learning. PhD thesis, University of Minnesota, ProQuest Dissertations Publishing

  13. Du YT, Li Q, Cai ZM, Guan XH (2013) Multi-view semi-supervised web image classification via co-graph. Neurocomputing 122:430–440

  14. Epshteyn A, DeJong G (2006) Generative prior knowledge for discriminative classification. AI Access Foundation 27(27):25–53

  15. Han C, Chen J, Wu QY, Mu S, Min HQ (2015) Sparse Markov chain-based semi-supervised multi-instance multi-label method for protein function prediction. J Bioinf Comput Biol 13(5):1543001

  16. Hou CP, Zhang CS, Wu Y, Nie FP (2010) Multiple view semi-supervised dimensionality reduction. Pattern Recognit 43(3):720–730

  17. http://multilingreuters.iit.nrc.ca/ReutersMultiLingualMultiView.htm

  18. Huang G, Song S, Gupta JND, Wu C (2014) Semi-supervised and unsupervised extreme learning machine. IEEE Trans Cybern 44(12):2405–2417

  19. Jiang Y, Liu J, Li ZC, Lu HQ (2014) Semi-supervised unified latent factor learning with multi-view data. Mach Vis Appl 25(7):1635–1645

  20. Koltchinskii V (2001) Rademacher penalties and structural risk minimization. IEEE Trans Inf Theory 47(5):1902–1914

  21. Koltchinskii V, Panchenko D (2000) Rademacher processes and bounding the risk of function learning. High Dimens Probab II:443–459

  22. Li DD, Zhu YJ, Wang Z, Chong CY, Gao DQ (2017) Regularized matrix-pattern-oriented classification machine with Universum. Neural Process Lett 45(3):1077–1098

  23. Liu CL, Hsaio WH, Lee CH, Gou FS (2014) Semi-supervised linear discriminant clustering. IEEE Trans Cybern 44(7):989–1000

  24. Liu DL, Tian YJ, Bie RF, Shi Y (2014) Self-Universum support vector machine. Pers Ubiquitous Comput 18:1813–1819

  25. Liu CL, Hsaio WH, Lee CH, Chang TH, Kuo TH (2016) Semi-supervised text classification with Universum learning. IEEE Trans Cybern 46(2):462–473

  26. Mendelson S (2002) Rademacher averages and phase transitions in Glivenko–Cantelli classes. IEEE Trans Inf Theory 48(1):251–263

  27. Nie F, Xu D, Li X, Xiang S (2011) Semi-supervised dimensionality reduction and classification through virtual label regression. IEEE Trans Syst Man Cybern B Cybern 41(3):675–685

  28. Peng B, Qian G, Ma YQ (2008) View-invariant pose recognition using multilinear analysis and the Universum. Adv Vis Comput 5359:581–591

  29. Schölkopf B, Shawe-Taylor J, Smola AJ, Williamson RC (1999) Generalization bounds via eigenvalues of the Gram matrix. Technical report 99-035, NeuroCOLT

  30. Schölkopf B, Simard P, Smola AJ, Vapnik V (1997) Prior knowledge in support vector kernels. In: Neural information processing systems (NIPS), pp 640–646

  31. Seliya N, Khoshgoftaar TM (2007) Software quality analysis of unlabeled program modules with semisupervised clustering. IEEE Trans Syst Man Cybern A Syst Hum 37(2):201–211

  32. Sheikhpour R, Sarram MA, Gharaghani S, Chahooki MAZ (2017) A survey on semi-supervised feature selection methods. Pattern Recognit 64:141–158

  33. Shen C, Wang P, Shen F, Wang H (2012) UBoost: boosting with the Universum. IEEE Trans Pattern Anal Mach Intell 34(4):825–832

  34. Shi CJ, Ruan QQ, An GY, Ge C (2015) Semi-supervised sparse feature selection based on multi-view Laplacian regularization. Image Vis Comput 41:1–10

  35. Sinz F, Chapelle O, Agarwal A, Schölkopf B (2008) An analysis of inference with the Universum. In: Advances in neural information processing systems (NIPS2008), pp 1369–1376

  36. Sun SL, Zhang QJ (2011) Multiple-view multiple-learner semi-supervised learning. Neural Process Lett 34:229–240

  37. Tao H, Hou CP, Nie FP, Zhu JB, Yi DY (2017) Scalable multi-view semi-supervised classification via adaptive regression. IEEE Trans Image Process 26(9):4283–4296

  38. Tzortzis G, Likas A (2012) Kernel-based weighted multi-view clustering. In: 2012 IEEE 12th international conference on data mining, pp 675–684

  39. Usunier N, Amini MR, Goutte C (2011) Multiview semi-supervised learning for ranking multilingual documents. In: Gunopulos D, Hofmann T, Malerba D, Vazirgiannis M (eds) Machine learning and knowledge discovery in databases. ECML PKDD 2011. Lecture notes in computer science, vol 6913. Springer, Berlin, Heidelberg, pp 443–458

  40. Vapnik V, Chervonenkis A (1971) On the uniform convergence of relative frequencies of events to their probabilities. Theory Probab Appl 16(2):264–280

  41. Vapnik V, Kotz S (1982) Estimation of dependences based on empirical data. Springer, Berlin

  42. Wang F (2011) Semisupervised metric learning by maximizing constraint margin. IEEE Trans Syst Man Cybern B Cybern 41(4):931–939

  43. Wang Z, Xu J, Chen SC, Gao DQ (2012) Regularized multi-view machine based on response surface technique. Neurocomputing 97:201–213

  44. Wang HY, Wang X, Zheng J, Deller JR, Peng HY, Zhu LQ, Chen WG, Li XL, Liu RJ, Bao HJ (2014) Video object matching across multiple non-overlapping camera views based on multi-feature fusion and incremental learning. Pattern Recognit 47(12):3841–3851

  45. Wang Z, Zhu YJ, Liu WW, Chen ZH, Gao DQ (2014) Multi-view learning with Universum. Knowl Based Syst 70:376–391

  46. Weston J, Collobert R, Sinz F, Bottou L, Vapnik V (2006) Inference with the Universum. In: The 23rd international conference on machine learning, pp 1009–1016

  47. Wu F, Jing XY, You XG, Yue D, Hu RM, Yang JY (2016) Multi-view low-rank dictionary learning for image classification. Pattern Recognit 50:143–154

  48. Xu XX, Li W, Xu D, Tsang IW (2016) Co-labeling for multi-view weakly labeled learning. IEEE Trans Pattern Anal Mach Intell 38(6):1113–1125

  49. Xu YM, Wang CD, Lai JH (2016) Weighted multi-view clustering with feature selection. Pattern Recognit 53:25–35

  50. Xu YT, Chen M, Yang ZJ, Li GH (2016) v-twin support vector machine with Universum data for classification. Appl Intell 44(4):956–968

  51. Yang ZK, Liu Z, Liu SY, Min L, Meng WT (2014) Adaptive multi-view selection for semi-supervised emotion recognition of posts in online student community. Neurocomputing 144:138–150

  52. Yu H, Wang XC, Wang GY (2017) A semi-supervised three-way clustering framework for multi-view data. In: International joint conference on rough sets, pp 313–325

  53. Zhang CH, Zheng WS (2017) Semi-supervised multi-view discrete hashing for fast image search. IEEE Trans Image Process 26(6):2604–2617

  54. Zhang D, Wang JD, Wang F, Zhang CS (2008) Semi-supervised classification with Universum. In: SIAM international conference on data mining, vol 2(4), pp 323–333

  55. Zhang D, Wang J, Si L (2011) Document clustering with Universum. In: International conference on research and development in information retrieval, pp 873–882

  56. Zhao J, Xu YT, Fujita H (2019) An improved non-parallel Universum support vector machine and its safe sample screening rule. Knowl Based Syst 170:79–88

  57. Zhu CM (2016) Improved multi-kernel classification machine with Nyström approximation technique and Universum data. Neurocomputing 175:610–634

  58. Zhu CM, Gao DQ (2015) Multiple matrix learning machine with five aspects of pattern information. Knowl Based Syst 83:13–31

  59. Zhu SH, Sun X, Jin DL (2016) Multi-view semi-supervised learning for image classification. Neurocomputing 208:136–142

Acknowledgements

This work is supported by the China Postdoctoral Science Foundation (Grant No. 2019M651576), the National Natural Science Foundation of China (Grant Nos. 61602296 and 61673301), the Natural Science Foundation of Shanghai (Grant No. 16ZR1414500), the National Key R&D Program of China (Grant No. 213), and the Major Project of the Ministry of Public Security (Grant No. 20170004). This work is also sponsored by the 'Chenguang Program' of the Shanghai Education Development Foundation and the Shanghai Municipal Education Commission (Grant No. 18CG54). The authors would like to thank these organizations for their support.

Author information

Correspondence to Duoqian Miao.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Communicated by V. Loia.


About this article

Cite this article

Zhu, C., Miao, D., Zhou, R. et al. Weight-and-Universum-based semi-supervised multi-view learning machine. Soft Comput (2019). https://doi.org/10.1007/s00500-019-04572-5

Keywords

  • Semi-supervised learning
  • Multi-view learning
  • View weights
  • Feature weights
  • Universum learning