Web Interaction Model with Multimodal Interface using Speech and Head Gestures

dc.contributor.author Aboubakar, Mountapmbeme
dc.date.accessioned 2020-09-23T09:43:05Z
dc.date.available 2020-09-23T09:43:05Z
dc.date.issued 2018-11-15
dc.identifier.uri http://hdl.handle.net/123456789/351
dc.description Supervised by Prof. Dr. Md. Kamrul Hasan en_US
dc.description.abstract Today, almost everyone in society relies on the internet to complete tasks in education, professional life, or social life, and in many cases the internet cannot be bypassed. However, a group of people with special needs have difficulty accessing information on the web efficiently because of limitations in using the standard methods of interacting with computers, so alternative interaction techniques are designed specifically to address these shortcomings. In an effort to make access to the World Wide Web inclusive irrespective of people's abilities, we have developed a multimodal input interface that combines head movements and voice commands. This interface serves as an alternative to the standard mouse, especially for people with upper-limb motor disabilities. We designed, developed, and evaluated the multimodal interface, and built predictive models of our system to serve as a basis for designers: a Fitts' law model that predicts the pointing time of the system, and a Keystroke-Level Model (KLM) of the whole system that predicts the time needed to complete a task with the interface (an illustrative sketch of both models follows this record). Since our primary interest is giving people with disabilities easy access to information on the web, we tested the interface on two of the most common web browsing activities today, e-commerce and social networking, and compared its performance in these navigation scenarios against the standard mouse and voice-only navigation. Our system performed better than voice-only navigation and also addressed some of the limitations of voice-only interaction, such as ambiguity and long lists of unstructured command vocabulary. en_US
dc.language.iso en en_US
dc.publisher Department of Computer Science and Engineering, Islamic University of Technology, Gazipur, Bangladesh en_US
dc.subject Human Computer Interaction (HCI), Interaction techniques, Multimodal Interaction, Speech Interfaces, Head Tracking en_US
dc.title Web Interaction Model with Multimodal Interface using Speech and Head Gestures en_US
dc.type Thesis en_US
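
The thesis's fitted Fitts' law coefficients and KLM operator set are not reproduced in this record, so the short Python sketch below only illustrates how such predictive models are applied: Fitts' law maps target distance and width to a predicted pointing time, and a Keystroke-Level Model sums the unit times of primitive operators into a task-time estimate. Every coefficient, operator name, and unit time here is a hypothetical placeholder, not a value measured in the thesis.

import math

def fitts_movement_time(a, b, distance, width):
    """Predict pointing time (s) with the Shannon form of Fitts' law:
    MT = a + b * log2(D/W + 1), where log2(D/W + 1) is the index of
    difficulty (ID) in bits."""
    return a + b * math.log2(distance / width + 1)

# A Keystroke-Level Model (KLM) predicts task completion time by summing
# the unit times of the primitive operators a task requires. These
# operators and unit times are illustrative placeholders only.
KLM_OPERATOR_TIMES = {
    "P": 1.50,  # point at a target with the head-tracking pointer
    "V": 0.80,  # speak a short voice command such as "click"
    "M": 1.35,  # mentally prepare for the next step
    "R": 0.50,  # wait for the system to respond
}

def klm_task_time(operators):
    """Sum operator unit times for a task given as an operator sequence."""
    return sum(KLM_OPERATOR_TIMES[op] for op in operators)

# Example: point at a link 400 px away and 40 px wide, with hypothetical
# regression coefficients a = 0.4 s and b = 0.25 s/bit.
print(f"Fitts MT: {fitts_movement_time(0.4, 0.25, 400, 40):.2f} s")
# Example: mentally prepare, point at a button, then speak "click".
print(f"KLM time: {klm_task_time(['M', 'P', 'V']):.2f} s")

In practice, a and b would be fitted by linear regression over measured (ID, movement-time) pairs for the head pointer, and the KLM operator times would be calibrated from user trials, in line with the evaluation against mouse and voice-only navigation described in the abstract.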

