Human Action Recognition using 3D Skeletal Joint Positions


dc.contributor.author Ahmed, Ferdous
dc.contributor.author Tariq, Abdullah-Al-
dc.date.accessioned 2021-09-13T09:22:39Z
dc.date.available 2021-09-13T09:22:39Z
dc.date.issued 2014-11-15
dc.identifier.citation [1] J. K. Aggarwal and M. S. Ryoo, “Human activity analysis: A review,” ACM Computing Surveys (CSUR), vol. 43, no. 3, article 16, 43 pages, 2011.
[2] P. Turaga, R. Chellappa, V. S. Subrahmanian and O. Udrea, “Machine recognition of human activities: A survey,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 11, pp. 1473-1488, 2008.
[3] A. Bobick and J. Davis, “The recognition of human movement using temporal templates,” IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 23, no. 3, pp. 257-267, 2001.
[4] H. Meng, N. Pears and C. Bailey, “A human action recognition system for embedded computer vision application,” in Computer Vision and Pattern Recognition (CVPR), pp. 21-22, 2007.
[5] J. W. Davis and A. Tyagi, “Minimal-latency human action recognition using reliable-inference,” Image and Vision Computing (IVC), vol. 24, no. 5, pp. 455-473, 2006.
[6] V. Kellokumpu, M. Pietikainen and J. Heikkila, “Human activity recognition using sequences of postures,” in IAPR Conference on Machine Vision Applications, 2005.
[7] M. D. Rodriguez, J. Ahmed and M. Shah, “A spatio-temporal maximum average correlation height filter for action recognition,” in Computer Vision and Pattern Recognition (CVPR), 2008.
[8] T. T. Thanh, F. Chen, K. Kotani and B. Le, “Extraction of Discriminative Patterns from Skeleton Sequences for Accurate Action Recognition,” Fundamenta Informaticae, vol. 130, no. 2, pp. 247-261, 2014.
[9] “The Teardown,” Engineering & Technology, vol. 6, no. 3, pp. 94-95, 2011.
[10] P. Huang, A. Hilton and J. Starck, “Shape Similarity for 3D Video Sequences of People,” International Journal of Computer Vision (IJCV), special issue on 3D Object Retrieval, vol. 89, no. 2-3, pp. 362-381, 2010.
[11] Y. Sheikh, M. Sheikh and M. Shah, “Exploring the space of a human action,” in IEEE International Conference on Computer Vision (ICCV), vol. 1, pp. 144-149, 2005.
[12] L. Xia, “UTKinect-Action Dataset,” [Online]. Available: http://cvrc.ece.utexas.edu/KinectDatasets/HOJ3D.html.
[13] C. Sinthanayothin, W. Bholsithi and N. Wongwaen, “Skeleton Tracking using Kinect Sensor & Displaying in 3D Virtual Scene,” International Journal of Advancements in Computing Technology (IJACT), vol. 4, no. 11, pp. 213-223, 2012.
[14] L. Xia, C.-C. Chen and J. K. Aggarwal, “View Invariant Human Action Recognition Using Histograms of 3D Joints,” in CVPR Workshops (CVPRW), 2012.
[15] A. Baak, M. Müller and H.-P. Seidel, “An Efficient Algorithm for Keyframe-based Motion Retrieval in the Presence of Temporal Deformations,” in MIR ’08: Proceedings of the 1st ACM International Conference on Multimedia Information Retrieval, 2008.
[16] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman and A. Blake, “Real-Time Human Pose Recognition in Parts from a Single Depth Image,” in Computer Vision and Pattern Recognition (CVPR), pp. 57-60, 2011.
[17] M. Z. Uddin, N. D. Thang and T.-S. Kim, “Human Activity Recognition Using Body Joint-Angle Features and Hidden Markov Model,” ETRI Journal, vol. 33, no. 4, pp. 569-579, 2011.
[18] J. Yamato, J. Ohya and K. Ishii, “Recognizing human action in time-sequential images using hidden Markov model,” in Computer Vision and Pattern Recognition (CVPR), pp. 40-45, 1992.
[19] M. Ankerst, G. Kastenmüller, H.-P. Kriegel and T. Seidl, “3D Shape Histograms for Similarity Search and Classification in Spatial Databases,” Lecture Notes in Computer Science, vol. 1651, pp. 207-226, 1999.
[20] R. Lublinerman, N. Ozay, D. Zarpalas and O. Camps, “Activity recognition from silhouettes using linear systems and model (in)validation techniques,” in International Conference on Pattern Recognition (ICPR), 2006.
[21] S. Savarese, A. DelPozo, J. Niebles and L. Fei-Fei, “Spatial-temporal correlatons for unsupervised action classification,” in Workshop on Motion and Video Computing (WMVC), pp. 117-121, 2008.
[22] L. Xia, C.-C. Chen and J. K. Aggarwal, “Human Detection Using Depth Information by Kinect,” in Workshop on Human Activity Understanding from 3D Data (HAU3D), in conjunction with CVPR, Colorado, 2011.
[23] X. Yang and Y. Tian, “Effective 3D Action Recognition Using EigenJoints,” Journal of Visual Communication and Image Representation, vol. 25, no. 1, pp. 2-11, 2014.
[24] W. Li, Z. Zhang and Z. Liu, “Action recognition based on a bag of 3D points,” in CVPR Workshops (CVPRW), 2010. en_US
dc.identifier.uri http://hdl.handle.net/123456789/985
dc.description Supervised by Md. Hasanul Kabir, Ph.D., Associate Professor, Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Board Bazar, Gazipur-1704, Bangladesh. en_US
dc.description.abstract Human Action Recognition is one of the intriguing research areas of modern Artificial Intelligence and Computer Vision. Researchers have proposed a variety of methods to give machines the capability of recognizing human actions. One widely explored approach uses 3D depth images to recognize actions; another considers human silhouettes to predict them. In this thesis we introduce a novel method for extracting key frames for human action recognition from 3D skeletal joint locations. Key frames are selected based on the distance from each frame to its neighbours, so that a fixed number of frames is chosen from a sequence of arbitrary length. We use a Microsoft Kinect to extract the joint locations; it provides twenty joint locations per person in a 3D Cartesian coordinate system. Though the Kinect's joint location estimates contain some error, we treat them as accurate, and our research proceeds from that assumption. We introduce a new feature representation by combining histograms of 3D joints (HOJ3D) with a static posture feature of the 3D skeletal joint locations. By combining the two representations we aim to overcome their individual disadvantages: HOJ3D fails to represent how individual joints change location with respect to other joints, while the static posture feature fails to represent how the joints are distributed. We then use Hidden Markov Models (HMMs) to recognize the actions. We perform an extensive set of experiments on publicly available datasets and compare our method with existing methods in the field. Evaluated with n-fold cross-validation, the results show that our method is more accurate and robust while consuming less time to generate key frames. We also compare performance across different numbers of key frames and different numbers of HMM hidden states. The method can run in real time and can be deployed for security, augmented reality and other computer vision applications. en_US
dc.language.iso en en_US
dc.publisher Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Board Bazar, Gazipur-1704, Bangladesh en_US
dc.title Human Action Recognition using 3D Skeletal Joint Positions en_US
dc.type Thesis en_US
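
The abstract above outlines a three-step pipeline: select a fixed number of key frames by neighbour distance, build a joint-location feature per key frame, and classify the sequence with Hidden Markov Models. The Python sketch below illustrates one plausible reading of that pipeline; it is not the thesis's implementation. The top-k motion-energy criterion, the hip-centred stand-in for the static posture feature, and the use of hmmlearn's GaussianHMM are all assumptions made for illustration.

# Illustrative sketch only -- not the thesis's actual implementation.
# Assumes each frame is a (20, 3) NumPy array of Kinect joint positions
# (joint 0 = hip centre in the Kinect v1 skeleton) and that the
# third-party hmmlearn package is installed.
import numpy as np
from hmmlearn import hmm

def select_key_frames(frames, k):
    """Pick k key frames from a sequence of arbitrary length by scoring
    each frame with its distance to its temporal neighbours (one
    plausible reading of the neighbour-distance criterion)."""
    flat = frames.reshape(len(frames), -1)                 # (T, 60)
    step = np.linalg.norm(np.diff(flat, axis=0), axis=1)   # frame-to-next distance
    score = np.zeros(len(frames))
    score[:-1] += step       # distance to the next frame
    score[1:] += step        # distance to the previous frame
    keep = np.sort(np.argsort(score)[-k:])  # k highest scores, temporal order
    return frames[keep]

def static_posture_feature(frame):
    """Joint positions relative to the hip centre: a simple stand-in
    for the thesis's static posture feature."""
    return (frame - frame[0]).ravel()        # 60-dimensional vector

def sequence_features(frames, k=8):
    keys = select_key_frames(frames, k)
    return np.stack([static_posture_feature(f) for f in keys])

def train_models(sequences_by_action, n_states=4):
    """Fit one Gaussian HMM per action class."""
    models = {}
    for action, seqs in sequences_by_action.items():
        feats = [sequence_features(s) for s in seqs]
        X, lengths = np.concatenate(feats), [len(f) for f in feats]
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[action] = m
    return models

def classify(models, frames):
    """Label a sequence with the class whose HMM scores it highest."""
    feats = sequence_features(frames)
    return max(models, key=lambda a: models[a].score(feats))

Training one HMM per action class and classifying by maximum log-likelihood is the standard construction for HMM-based action recognition; the thesis's combined HOJ3D-plus-static-posture feature would simply replace static_posture_feature above.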

