On the use of genuine-impostor statistical information for score fusion in multimodal biometrics

Sur l’usage de l’information statistique client-imposteur pour la fusion des scores en biométrie multimodale


Matching-score-level fusion techniques in multimodal person verification conventionally use global score statistics in the normalization and fusion stages. In this paper, novel normalization and fusion methods are presented that exploit the separate statistics of the monomodal genuine and impostor scores in order to reduce the overlap between the genuine and impostor probability density function (PDF) lobes and improve the verification rate. Joint mean normalization is an affine transformation that normalizes the means of the monomodal biometric scores separately for genuine and impostor individuals. Histogram equalization is used to align the statistical distributions of the monomodal scores and make the separate statistics comparable across modalities. The proposed weighted fusion methods are designed to minimize the variances of the separate multimodal statistics and thus reduce the overlap of the multimodal PDF lobes. Results obtained by fusing speech and face scores on the POLYCOST and XM2VTS databases show that the proposed techniques outperform conventional methods.
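The pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the genuine/impostor target means of 1 and 0, the quantile-mapping form of histogram equalization, and the inverse-variance weighting rule are all assumptions made for the sake of a concrete example.

```python
import numpy as np

def histogram_equalization(scores, reference):
    """Map `scores` so their empirical distribution matches `reference`
    (quantile mapping, one common realization of histogram equalization)."""
    ranks = np.searchsorted(np.sort(scores), scores, side="right") / len(scores)
    return np.quantile(reference, np.clip(ranks, 0.0, 1.0))

def joint_mean_normalization(scores, genuine_mean, impostor_mean,
                             target_gen=1.0, target_imp=0.0):
    """Affine map sending a modality's genuine and impostor score means to
    common target values (illustrative choice of targets)."""
    scale = (target_gen - target_imp) / (genuine_mean - impostor_mean)
    return target_imp + scale * (scores - impostor_mean)

def min_variance_weights(genuine_vars, impostor_vars):
    """Fusion weights inversely proportional to each modality's combined
    genuine + impostor score variance (a simple variance-reduction rule)."""
    inv = 1.0 / (np.asarray(genuine_vars) + np.asarray(impostor_vars))
    return inv / inv.sum()
```

With these pieces, each modality's scores would first be mean-normalized (using genuine/impostor statistics estimated on training data), then equalized to a common reference distribution, and finally combined as a weighted sum using `min_variance_weights`.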





Author information



Corresponding author

Correspondence to Pascual Ejarque.


About this article

Cite this article

Ejarque, P., Garde, A., Anguita, J. et al. On the use of genuine-impostor statistical information for score fusion in multimodal biometrics. Ann. Telecommun. 62, 109–129 (2007). https://doi.org/10.1007/BF03253252


Key words

  • Biometrics
  • Comparative study
  • Mixed method
  • Statistical method
  • Histogram
  • Experimental study
  • Speaker recognition
  • Image recognition
  • Face
  • Data fusion
