Depth-aware Hand Gesture Recognition for Human-Computer Interaction


dc.contributor.author Mahmud, Hasan
dc.date.accessioned 2022-06-11T13:54:36Z
dc.date.available 2022-06-11T13:54:36Z
dc.date.issued 2022-06-01
dc.identifier.uri http://hdl.handle.net/123456789/1490
dc.description Supervised by Prof. Dr. Md. Kamrul Hasan, Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Board Bazar, Gazipur-1704, Bangladesh en_US
dc.description.abstract Hand gestures can be defined as movements of the hands and fingers in particular orientations to convey meaningful information. Recently, inexpensive depth cameras have opened ample research opportunities to work with depth-based features alongside image-based features. Existing computer vision-based approaches have limitations in capturing the depth variations present in both fine-grained and coarse-grained gestures. This gives us scope to exploit depth information in machine learning models to distinguish such hand gestures correctly. In this thesis, we propose a unique depth quantization technique that can effectively distinguish different hand gestures. First, using this technique, we generate contrast-varying depth images that help extract salient features from images of static gestures. Second, we use depth values to capture finger movement in the Z-direction to discriminate on-air writing of English Capital Alphabets (ECAs). We use depth-based features, such as raw depth values and quantized depth values, and non-depth features, such as 2D finger-joint points, fingertip coordinates, and features derived from them, and merge these features into a unique dataset for testing the significance of depth features in terms of recognition accuracy. Experiments on both static and dynamic hand gestures show that the proposed approach yields higher recognition accuracy. Third, to test our proposed method in a deep learning setting, we design a depth-aware CNN-LSTM deep learning model to recognize 14 and 28 dynamic hand gestures. The model takes gray-scale contrast-varying depth images and 2D hand-skeleton joint points as multimodal input. We achieve better recognition accuracy by performing feature-level and score-level fusion on the benchmark dataset. en_US
dc.language.iso en en_US
dc.publisher Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Board Bazar, Gazipur-1704, Bangladesh en_US
dc.subject Gesture Recognition; Depth Information; Static Hand Gesture; Dynamic Hand Gesture; Depth Quantization; Machine Learning en_US
dc.title Depth-aware Hand Gesture Recognition for Human-Computer Interaction en_US
dc.type Thesis en_US
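
The depth quantization step described in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical rendering of the idea of mapping raw depth values to a small number of gray levels to produce contrast-varying depth images; the near/far planes, bin count, and gray mapping are assumptions for illustration, not the thesis's reported parameters.

# Illustrative sketch (not the thesis's exact method): quantize raw depth
# values into a small number of levels, then map those levels to gray
# intensities to produce a contrast-varying depth image.
import numpy as np

def quantize_depth(depth_mm: np.ndarray, near: float = 400.0,
                   far: float = 1200.0, levels: int = 8) -> np.ndarray:
    """Map raw depth (mm) to `levels` gray bands between near and far planes.

    `near`, `far`, and `levels` are assumed parameters for illustration.
    """
    d = np.clip(depth_mm, near, far)
    bins = ((d - near) / (far - near) * (levels - 1)).astype(np.uint8)
    # Spread the quantized levels across the full 0-255 gray range so that
    # small Z-direction differences become visible contrast steps.
    return (bins * (255 // (levels - 1))).astype(np.uint8)

# Example: a synthetic 4x4 depth patch spanning the near-far range
frame = np.linspace(400, 1200, 16).reshape(4, 4)
print(quantize_depth(frame))

With 8 levels, points that differ only slightly in Z fall into visibly different gray bands, which is what makes fine-grained depth variation easier for a downstream classifier to pick up.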
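The depth-aware CNN-LSTM with multimodal input mentioned in the abstract could look roughly like the following PyTorch sketch. The layer sizes, the 22-joint skeleton, and the 14-class head are illustrative assumptions; the thesis's actual architecture is not specified in this record.

# Illustrative sketch only: a small CNN encodes each gray-scale depth frame,
# the per-frame CNN features are concatenated with flattened 2D skeleton
# joints (feature-level fusion), and an LSTM models the temporal dynamics.
import torch
import torch.nn as nn

class DepthAwareCNNLSTM(nn.Module):
    def __init__(self, num_joints: int = 22, num_classes: int = 14):
        super().__init__()
        self.cnn = nn.Sequential(                     # per-frame depth encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> (B*T, 32)
        )
        fused = 32 + 2 * num_joints                   # CNN feats + (x, y) joints
        self.lstm = nn.LSTM(fused, 64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, depth, joints):
        # depth:  (B, T, 1, H, W) gray-scale quantized depth frames
        # joints: (B, T, num_joints, 2) 2D skeleton coordinates
        B, T = depth.shape[:2]
        f = self.cnn(depth.reshape(B * T, *depth.shape[2:])).reshape(B, T, -1)
        x = torch.cat([f, joints.reshape(B, T, -1)], dim=-1)  # feature fusion
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                  # logits from last step

# Example: batch of 2 sequences, 8 frames of 64x64 depth, 22 joints
model = DepthAwareCNNLSTM()
logits = model(torch.randn(2, 8, 1, 64, 64), torch.randn(2, 8, 22, 2))
print(logits.shape)  # torch.Size([2, 14])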
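Alongside feature-level fusion (the concatenation inside the model above), the abstract also mentions score-level fusion. A common scheme, assumed here purely for illustration, takes a weighted average of the class probabilities produced by per-modality classifiers; the thesis's exact weighting is not given in this record.

# Assumed score-level fusion: weighted average of softmax scores from
# per-modality classifiers (e.g., a depth-image model and a skeleton model).
import torch

def fuse_scores(depth_logits: torch.Tensor, skel_logits: torch.Tensor,
                w: float = 0.5) -> torch.Tensor:
    """Weighted average of class probabilities; `w` is an assumed weight."""
    p_depth = torch.softmax(depth_logits, dim=-1)
    p_skel = torch.softmax(skel_logits, dim=-1)
    return w * p_depth + (1.0 - w) * p_skel

# Example: fuse two classifiers over 14 gesture classes, then pick the class
pred = fuse_scores(torch.randn(2, 14), torch.randn(2, 14)).argmax(dim=-1)
print(pred.shape)  # torch.Size([2])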