Hand part classification using single depth images

Myoung Kyu Sohn, Dong Ju Kim, Hyunduk Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Hand pose recognition has received increasing attention as an area of HCI. Recently, with the spread of low-cost 3D cameras, research on understanding more natural gestures has grown. In this paper we present a method for hand part classification and joint estimation from a single depth image. We apply random decision forests (RDF) to hand part classification: each foreground pixel in the hand image is labeled by the RDF, which is called per-pixel classification. Hand joints are then estimated from the classified hand parts. We propose a robust feature-extraction method for per-pixel classification, which enhances the accuracy of hand part classification. Depth images and label images synthesized from a 3D hand mesh model are used for algorithm verification. Finally, we apply our algorithm to real depth images from a conventional 3D camera and present the experimental results.
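The per-pixel classification and joint-estimation pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a Shotton-style depth-normalized pair feature (the paper proposes its own more robust feature), uses scikit-learn's `RandomForestClassifier` in place of the authors' RDF, and substitutes random synthetic data for the mesh-rendered depth/label images.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def depth_difference_features(depth, pixels, offsets):
    # Depth-normalized pair feature f(x) = d(x + u/d(x)) - d(x + v/d(x)).
    # Dividing the offsets by the depth at x makes the feature roughly
    # invariant to the hand's distance from the camera.
    h, w = depth.shape
    feats = np.empty((len(pixels), len(offsets)), dtype=np.float32)
    for i, (y, x) in enumerate(pixels):
        d = depth[y, x]
        for j, (u, v) in enumerate(offsets):
            y1 = min(max(int(y + u[0] / d), 0), h - 1)
            x1 = min(max(int(x + u[1] / d), 0), w - 1)
            y2 = min(max(int(y + v[0] / d), 0), h - 1)
            x2 = min(max(int(x + v[1] / d), 0), w - 1)
            feats[i, j] = depth[y1, x1] - depth[y2, x2]
    return feats

# Synthetic stand-ins for the paper's mesh-synthesized depth and label images.
H, W, N_PARTS = 32, 32, 6
depth = rng.uniform(0.5, 2.0, size=(H, W))       # depth in metres
labels = rng.integers(0, N_PARTS, size=(H, W))   # per-pixel hand-part id
pixels = [(y, x) for y in range(H) for x in range(W)]
offsets = [(rng.uniform(-8, 8, 2), rng.uniform(-8, 8, 2)) for _ in range(20)]

# Per-pixel classification with a random forest standing in for the RDF.
X = depth_difference_features(depth, pixels, offsets)
y = labels.reshape(-1)
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
pred = clf.predict(X)

# Joint estimation: a simple centroid of the pixels assigned to each part
# (the paper estimates joints from the classified parts; the centroid is
# just one plausible aggregation).
centroids = {
    p: np.mean([px for px, lb in zip(pixels, pred) if lb == p], axis=0)
    for p in np.unique(pred)
}
```

Training on mesh-rendered images with ground-truth part labels, as the paper does, lets the forest be fit on arbitrarily large labeled sets before being applied to real camera depth images.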

Original language: English
Title of host publication: Computer Vision - ACCV 2014 Workshops, Revised Selected Papers
Editors: C.V. Jawahar, Shiguang Shan
Publisher: Springer Verlag
Pages: 253-261
Number of pages: 9
ISBN (Print): 9783319166308
DOIs
State: Published - 2015
Event: 12th Asian Conference on Computer Vision, ACCV 2014 - Singapore, Singapore
Duration: 1 Nov 2014 - 2 Nov 2014

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9009
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 12th Asian Conference on Computer Vision, ACCV 2014
Country/Territory: Singapore
City: Singapore
Period: 1/11/14 - 2/11/14

Bibliographical note

Publisher Copyright:
© Springer International Publishing Switzerland 2015.
