Hand Gesture Detection and Recognition Using Affine-SIFT, Bag-of-Features and Extreme Learning Machine Techniques
This paper presents a real-time system for interacting with applications via hand gestures. The system detects the bare hand in video sequences by subtracting the background so that only the hand region is retained, and recognition follows a machine learning approach. In the training stage, keypoints are extracted from the hand posture contour using the affine scale-invariant feature transform (ASIFT), clustered with K-means, and mapped into a histogram vector (bag-of-features). Each vector is assigned a label and used as input to train an Extreme Learning Machine (ELM). In the testing stage, for every frame captured by the webcam, the hand is detected, keypoints are extracted from the hand segment only as described in our algorithm, mapped through the cluster model to generate the histogram vector, and fed into the trained ELM classifier to recognize the hand gesture.
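The bag-of-features and ELM stages of the pipeline above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it stands in synthetic descriptor arrays for real ASIFT output, uses scikit-learn's K-means for the vocabulary, and implements the standard ELM formulation (random hidden weights, closed-form least-squares output weights); all names here are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# --- Bag-of-features: cluster descriptors into a visual vocabulary ---
def build_vocabulary(all_descriptors, k):
    """Cluster keypoint descriptors (ASIFT in the paper) into k visual words."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_descriptors)

def bof_histogram(descriptors, kmeans):
    """Map one image's descriptors to a normalized k-bin histogram vector."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

# --- Minimal ELM: random hidden layer, pseudo-inverse output weights ---
class ELM:
    def __init__(self, n_hidden, rng):
        self.n_hidden, self.rng = n_hidden, rng

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)        # hidden-layer responses
        T = np.eye(n_classes)[y]                # one-hot target matrix
        self.beta = np.linalg.pinv(H) @ T       # closed-form least squares
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

# Synthetic stand-in for ASIFT descriptors: each gesture class draws its
# descriptors around a distinct center, so the histograms are separable.
centers = rng.normal(size=(2, 16)) * 5.0
images, labels = [], []
for label in (0, 1):
    for _ in range(20):
        images.append(centers[label] + rng.normal(size=(30, 16)))
        labels.append(label)
labels = np.array(labels)

kmeans = build_vocabulary(np.vstack(images), k=8)
X = np.array([bof_histogram(d, kmeans) for d in images])
elm = ELM(n_hidden=40, rng=rng).fit(X, labels)
accuracy = (elm.predict(X) == labels).mean()
```

Because the output weights are solved in closed form rather than by gradient descent, ELM training is a single matrix factorization, which is what makes it attractive for a real-time gesture system.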
Keywords: Affine scale-invariant feature transform (ASIFT); bag-of-features; contour; Extreme Learning Machine (ELM); hand gesture; hand posture; K-means