Local Learning Multiple Probabilistic Linear Discriminant Analysis

  • Yi Yang
  • Jiasong Sun
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 528)


Probabilistic Linear Discriminant Analysis (PLDA) has delivered impressive results on challenging tasks such as face recognition and speaker recognition. Like most state-of-the-art machine learning techniques, PLDA learns its model parameters globally over the whole training set. Such globally learnt parameters, however, can hardly characterize all relevant information, especially for data sets whose underlying feature spaces are heterogeneous and abound in complex manifolds. PLDA implicitly assumes homogeneous data, since its parameters are estimated from the entire training set, and this global learning strategy has proven ineffective on heterogeneous data. In this paper, we relax this assumption by partitioning the feature space and locally learning a separate PLDA model for each partition. Experiments on several standard datasets demonstrate the superiority of the proposed method over the original PLDA. We complete the approach by assigning each test sample a probability of matching each local model; this probabilistic scoring can further integrate different recognition technologies, including other kinds of biometric recognition. The proposed log-likelihood scoring in the recognition stage consists of three steps.
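The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: each local PLDA model is replaced by a single Gaussian (mean plus covariance) as a hypothetical stand-in, and the three stages are (1) partition the feature space, (2) fit one local model per partition, (3) score a test vector by a posterior-weighted log-likelihood over the local models.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import multivariate_normal

# Toy two-cluster training set standing in for a heterogeneous feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)),
               rng.normal(5.0, 1.0, (100, 5))])

# Step 1: separate the feature space into K local regions.
K = 2
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X)

# Step 2: learn one local model per region.
# NOTE: a full Gaussian is used here as an illustrative stand-in for PLDA.
models, priors = [], []
for k in range(K):
    Xk = X[km.labels_ == k]
    cov = np.cov(Xk, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
    models.append(multivariate_normal(Xk.mean(axis=0), cov))
    priors.append(len(Xk) / len(X))

# Step 3: fuse per-model log-likelihoods, weighted by P(model | x).
def local_score(x):
    ll = np.array([m.logpdf(x) for m in models])  # per-model log-likelihood
    post = np.exp(ll) * np.array(priors)
    post /= post.sum()                            # posterior over local models
    return float(post @ ll)                       # posterior-weighted score

score = local_score(np.full(5, 5.0))
```

A test vector close to one cluster centre receives a much higher fused score than a vector between the clusters, which is the behaviour the probabilistic model-selection step relies on.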


Keywords: Local learning · Probabilistic linear discriminant analysis · Clustering · Bayesian method · Fusion



This work was supported by NSFC under grant 61105017.



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Tsinghua National Laboratory for Information Science and Technology, Department of Electronic Engineering, Tsinghua University, Beijing, China
