Abstract
In this article, we propose a novel multimodal data analytics scheme for human activity recognition. Traditional data analysis schemes for activity recognition with heterogeneous sensor network setups in eHealth application scenarios are usually heuristic processes that depend on explicit domain knowledge. Relying on such explicit knowledge is problematic when the aim is automatic, unsupervised or semi-supervised monitoring and tracking of different activities, and detection of abnormal events. Experiments on the publicly available OPPORTUNITY activity recognition dataset from the UCI Machine Learning Repository demonstrate the potential of our approach for next-generation unsupervised automatic classification and detection in remote activity recognition for novel eHealth application scenarios, such as monitoring and tracking of the elderly, the disabled, and those with special needs.
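To make the idea of unsupervised activity discovery from multimodal sensor streams concrete, the following is a minimal illustrative sketch, not the paper's actual method: it windows a fused sensor stream, extracts simple per-channel statistics, and clusters the windows without labels. The window length, features, cluster count, and the synthetic two-regime signal are all assumptions for demonstration only.

```python
import numpy as np

def sliding_windows(signal, win, step):
    """Split a (T, channels) stream into overlapping windows."""
    return np.stack([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, step)])

def window_features(windows):
    """Simple per-channel mean and std features for each window."""
    return np.concatenate([windows.mean(axis=1), windows.std(axis=1)], axis=1)

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: returns a cluster label for each row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic stand-in for a multimodal recording: six fused sensor
# channels (e.g. accelerometer + gyroscope) with two activity regimes.
rng = np.random.default_rng(1)
calm = rng.normal(0.0, 0.1, size=(500, 6))
active = rng.normal(0.0, 1.0, size=(500, 6))
stream = np.vstack([calm, active])

feats = window_features(sliding_windows(stream, win=50, step=25))
labels = kmeans(feats, k=2)
print(labels.shape)
```

On real data such as OPPORTUNITY, the synthetic `stream` would be replaced by the recorded body-worn and ambient sensor channels; the clustering step stands in for whichever unsupervised or semi-supervised model is actually used.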
Copyright information
© 2014 IFIP International Federation for Information Processing
Cite this paper
Chetty, G., Yamin, M. (2014). A Novel Multimodal Data Analytic Scheme for Human Activity Recognition. In: Liu, K., Gulliver, S.R., Li, W., Yu, C. (eds) Service Science and Knowledge Innovation. ICISO 2014. IFIP Advances in Information and Communication Technology, vol 426. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-55355-4_47
Print ISBN: 978-3-642-55354-7
Online ISBN: 978-3-642-55355-4