Multimedia Tools and Applications

Volume 76, Issue 3, pp 3999–4034

Discriminant component analysis for privacy protection and visualization of big data

Abstract

Big data has many divergent types of sources, from physical (sensor/IoT) to social and cyber (web) types, rendering it messy, imprecise, and incomplete. Due to its quantitative (volume and velocity) and qualitative (variety) challenges, big data resembles, to its users, "the elephant to the blind men". It is imperative to enact a major paradigm shift in data mining and learning tools so that information from diversified sources can be integrated to unravel what is hidden in massive and messy big data and, metaphorically speaking, let the blind men "see" the elephant. This paper addresses yet another vital "V"-paradigm: "Visualization". Visualization tools are meant to supplement (rather than replace) domain expertise (e.g. that of a cardiologist) and to provide a big picture that helps users formulate critical questions and subsequently postulate heuristic and insightful answers. For big data, the curse of high feature dimensionality raises grave concerns about computational complexity and over-training. In this paper, we explore various projection methods for dimension reduction, a prelude to the visualization of vectorial and non-vectorial data. A popular visualization tool for unsupervised learning is Principal Component Analysis (PCA). PCA aims at the best recoverability of the original data in the Euclidean Vector Space (EVS). However, PCA is not effective for supervised and collaborative learning environments. Discriminant Component Analysis (DCA), essentially a supervised PCA, can be derived via the notion of a Canonical Vector Space (CVS). The signal-subspace components of DCA are associated with the discriminant distance/power (related to classification effectiveness), while the noise-subspace components of DCA are tightly coupled with recoverability and/or privacy protection. DCA enjoys two major merits. First, because the rank of the signal subspace is limited by the number of classes, DCA can effectively support classification using a relatively small dimensionality (i.e. high compression). Second, in DCA, the eigenvalues of the noise subspace are ordered according to their corresponding reconstruction errors and can thus be used to control recoverability or anti-recoverability by applying, respectively, a negative or positive ridge. Via DCA, individual data can be highly compressed before being uploaded to the cloud, thus better enabling privacy protection. In many practical scenarios, additional privacy protection can be incorporated by allowing individual participants to selectively hide some personal features. The classification of such masked data calls for a Kernel Approach to Incomplete Data Analysis (KAIDA). More specifically, we extend PCA/DCA to their kernel variants. The success of kernel machines hinges upon the kernel function adopted to characterize the similarity of pairs of partially specified vectors. Simulations on the HAR dataset confirm that DCA far outperforms PCA, in both their conventional and kernelized variants. For the latter, the visualization/classification results suggest favorable performance by the proposed partial correlation kernels over the imputed RBF kernel. In addition, the visualization results point to a potentially promising approach via multiple kernels, such as combining an imputed Gaussian RBF kernel with a non-imputed partial correlation kernel.
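To make the projection idea concrete, the following is a minimal sketch of a DCA-style supervised projection in the spirit of the Fisher/MDA formulation the abstract alludes to. The scatter-matrix construction, the placement of the ridge parameter rho, and the name dca_like_projection are assumptions of this sketch, not the paper's exact derivation.

```python
import numpy as np
from scipy.linalg import eigh

def dca_like_projection(X, y, rho=1e-3):
    """Sketch of a DCA-style (supervised-PCA) projection.

    X   : (n_samples, n_features) data matrix
    y   : (n_samples,) integer class labels
    rho : ridge added to the centered scatter matrix; per the abstract, a
          positive or negative ridge trades recoverability against
          anti-recoverability (its placement here is an assumption).

    Returns (W, lam): projection directions ordered by discriminant power.
    With L classes, at most L-1 leading columns carry discriminant
    information (the "signal" subspace); the rest form the "noise" subspace.
    """
    n, d = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu
    S_bar = Xc.T @ Xc                       # total (centered) scatter
    S_B = np.zeros((d, d))                  # between-class scatter
    for c in np.unique(y):
        Xk = X[y == c]
        dk = (Xk.mean(axis=0) - mu)[:, None]
        S_B += Xk.shape[0] * (dk @ dk.T)
    # Symmetric-definite generalized eigenproblem: maximize between-class
    # scatter relative to the ridge-regularized total scatter.
    lam, W = eigh(S_B, S_bar + rho * np.eye(d))
    order = np.argsort(lam)[::-1]           # descending discriminant power
    return W[:, order], lam[order]

# Example: keep only the L-1 signal components before uploading to the cloud.
# W, lam = dca_like_projection(X, y)
# Z = (X - X.mean(axis=0)) @ W[:, :len(np.unique(y)) - 1]
```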
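For the incomplete-data setting, here is a minimal sketch of a partial-correlation-style kernel that scores each pair of vectors using only their jointly observed features, as opposed to imputing missing entries before applying an RBF kernel. The exact normalization and the use of NaN to mark hidden features are assumptions of this sketch.

```python
import numpy as np

def partial_correlation_kernel(X):
    """Pairwise similarity for partially specified vectors.

    Missing (hidden) features are marked with NaN. Each entry K[i, j] is a
    cosine-style correlation computed over the features observed in *both*
    samples i and j; pairs with no overlap get similarity 0. Note that such
    a kernel is not guaranteed to be positive semidefinite.
    """
    n = X.shape[0]
    observed = ~np.isnan(X)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            mask = observed[i] & observed[j]
            if mask.any():
                xi, xj = X[i, mask], X[j, mask]
                denom = np.linalg.norm(xi) * np.linalg.norm(xj)
                if denom > 0:
                    K[i, j] = K[j, i] = float(xi @ xj) / denom
    return K

# A multiple-kernel combination, as suggested in the abstract, could then mix
# this with an imputed Gaussian RBF kernel, e.g. K = a * K_pc + (1 - a) * K_rbf.
```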

Keywords

Big data · Visualization · Data matrix · Vectorial and non-vectorial data analysis · Unsupervised learning · Supervised learning · Collaborative learning · Dimension reduction · Projection matrix · Subspace analysis · Net entropy · Component analysis · Discriminant power (DP) · PCA (principal component analysis) · MDA (multiple discriminant analysis) · DCA (discriminant component analysis) · EVS (Euclidean vector space) · CVS (canonical vector space) · Signal subspace · Discriminant distance (DD) · Noise subspace · Recoverability · Anti-recoverability · Privacy protection · Learning subspace property (LSP) · Kernel machine · KDCA · Kernel approach to incomplete data analysis (KAIDA)

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

Department of Electrical Engineering, Princeton University, Princeton, USA
