A Deep Learning Approach for Hand Posture Recognition from Depth Data

  • Thomas Kopinski
  • Fabian Sachara
  • Alexander Gepperth
  • Uwe Handmann
Conference paper

DOI: 10.1007/978-3-319-44781-0_22

Part of the Lecture Notes in Computer Science book series (LNCS, volume 9887)
Cite this paper as:
Kopinski T., Sachara F., Gepperth A., Handmann U. (2016) A Deep Learning Approach for Hand Posture Recognition from Depth Data. In: Villa A., Masulli P., Pons Rivero A. (eds) Artificial Neural Networks and Machine Learning – ICANN 2016. ICANN 2016. Lecture Notes in Computer Science, vol 9887. Springer, Cham

Abstract

Given the success of convolutional neural networks (CNNs) in numerous object recognition tasks during recent years, it seems logical to extend their applicability to three-dimensional data such as the point clouds provided by depth sensors. To this end, we present an approach that exploits the CNN's ability of automated feature generation and combines it with a novel 3D feature computation technique preserving the local information contained in the data. Experiments are conducted on a large data set of 600,000 samples of hand postures obtained via ToF (time-of-flight) sensors from 20 different persons, after an extensive parameter search to optimize the network structure. Generalization performance, measured by a leave-one-person-out scheme, exceeds that of any other method presented for this specific task, bringing the error for some persons down to 1.5 %.
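To make the evaluation protocol concrete, the sketch below illustrates a leave-one-person-out split together with a simple grid-based 3D descriptor computed from a point cloud. It is a minimal sketch on synthetic data, not the authors' feature computation or network: the occupancy-grid function is a hypothetical stand-in for the paper's 3D feature, and scikit-learn's LeaveOneGroupOut is assumed to be available.

    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut

    def occupancy_grid(points, grid=8):
        """Map an (N, 3) point cloud to a fixed-size grid of point counts.

        Generic stand-in for the paper's 3D descriptor, not the authors'
        actual feature computation technique.
        """
        mins = points.min(axis=0)
        spans = np.maximum(points.max(axis=0) - mins, 1e-6)
        idx = np.minimum(((points - mins) / spans * grid).astype(int), grid - 1)
        voxels = np.zeros((grid, grid, grid), dtype=np.float32)
        np.add.at(voxels, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
        return voxels / max(len(points), 1)  # normalise by cloud size

    # Hypothetical data: one point cloud, posture label and person id per sample.
    rng = np.random.default_rng(0)
    clouds = [rng.normal(size=(500, 3)) for _ in range(100)]
    X = np.stack([occupancy_grid(c) for c in clouds]).reshape(100, -1)
    y = rng.integers(0, 10, size=100)        # 10 hand posture classes
    persons = rng.integers(0, 20, size=100)  # 20 recording subjects

    # Leave-one-person-out: every fold holds out all samples of one person.
    logo = LeaveOneGroupOut()
    for train_idx, test_idx in logo.split(X, y, groups=persons):
        # A classifier would be trained on X[train_idx] and scored on X[test_idx];
        # here we only report the fold sizes to keep the sketch self-contained.
        held_out = persons[test_idx[0]]
        print(f"person {held_out}: train={len(train_idx)}, test={len(test_idx)}")

In the paper's setting, each fold would train the CNN on the data of the remaining 19 persons and report the classification error on the held-out person, which is what makes the measured performance a test of cross-subject generalization.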

Keywords

Deep learning · Hand posture recognition · 3D data

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Thomas Kopinski (1, 2)
  • Fabian Sachara (1, 2)
  • Alexander Gepperth (1, 2)
  • Uwe Handmann (1, 2)
  1. Hochschule Ruhr West, Computer Science Institute, Bottrop, Germany
  2. UIIS Lab and FLOWERS Team, Inria, Université Paris-Saclay, Palaiseau, France