dc.identifier.citation |
[1] M. S. Islam, S. S. S. Mousumi, N. A. Jessan, A. S. A. Rabby, and S. A. Hossain, “Ishara-Lipi: The first complete multipurpose open access dataset of isolated characters for Bangla sign language,” in 2018 International Conference on Bangla Speech and Language Processing (ICBSLP), pp. 1–4, IEEE, 2018.
[2] J. K. Chen, D. Sengupta, and R. R. Sundaram, “CS229 project final report: Sign language gesture recognition with unsupervised feature learning.”
[3] P. Doliotis, A. Stefan, C. McMurrough, D. Eckhard, and V. Athitsos, “Comparing gesture recognition accuracy using color and depth information,” in Proceedings of the 4th International Conference on Pervasive Technologies Related to Assistive Environments, pp. 1–7, 2011.
[4] B. Liao, J. Li, Z. Ju, and G. Ouyang, “Hand gesture recognition with generalized Hough transform and DC-CNN using RealSense,” in 2018 Eighth International Conference on Information Science and Technology (ICIST), pp. 84–90, IEEE, 2018.
[5] P. Molchanov, S. Gupta, K. Kim, and J. Kautz, “Hand gesture recognition with 3D convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1–7, 2015.
[6] N. Pugeault and R. Bowden, “Spelling it out: Real-time ASL fingerspelling recognition,” in 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. 1114–1119, IEEE, 2011.
[7] K. O. Rodriguez and G. C. Chavez, “Finger spelling recognition from RGB-D information using kernel descriptor,” in 2013 XXVI Conference on Graphics, Patterns and Images, pp. 1–7, IEEE, 2013.
[8] W. Nai, Y. Liu, D. Rempel, and Y. Wang, “Fast hand posture classification using depth features extracted from random line segments,” Pattern Recognition, vol. 65, pp. 1–10, 2017.
[9] H. Liang and J. Yuan, “Hand parsing and gesture recognition with a commodity depth camera,” in Computer Vision and Machine Learning with RGB-D Sensors, pp. 239–265, Springer, 2014.
[10] L. E. Potter, J. Araullo, and L. Carter, “The Leap Motion controller: A view on sign language,” in Proceedings of the 25th Australian Computer-Human Interaction Conference: Augmentation, Application, Innovation, Collaboration, pp. 175–178, 2013.
[11] D. Naglot and M. Kulkarni, “Real time sign language recognition using the Leap Motion controller,” in 2016 International Conference on Inventive Computation Technologies (ICICT), vol. 3, pp. 1–5, 2016.
[12] S. Mudduluru, “Indian sign language numbers recognition using Intel RealSense camera,” 2017.
[13] M. Mohandes, S. Aliyu, and M. Deriche, “Arabic sign language recognition using the Leap Motion controller,” in 2014 IEEE 23rd International Symposium on Industrial Electronics (ISIE), pp. 960–965, 2014.
[14] S. Lang, M. Block, and R. Rojas, “Sign language recognition using Kinect,” in International Conference on Artificial Intelligence and Soft Computing, pp. 394–402, Springer, 2012.
[15] J. Huang, W. Zhou, H. Li, and W. Li, “Sign language recognition using RealSense,” in 2015 IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP), pp. 166–170, IEEE, 2015.
[16] J. Mistry and B. Inden, “An approach to sign language translation using the Intel RealSense camera,” in 2018 10th Computer Science and Electronic Engineering (CEEC), pp. 219–224, IEEE, 2018. |
en_US |