Abstract:
Today, almost everyone relies on the internet to complete tasks in education, professional life, or social life; people of diverse abilities and limitations often cannot bypass it. However, a group of people with special needs have difficulty accessing information on the web efficiently because limitations prevent them from using the standard methods of interacting with computers. Alternative interaction techniques are therefore designed specifically to address these shortcomings. In an effort to make access to the World Wide Web inclusive irrespective of people's abilities, we have developed a multimodal input interface that combines head movements and voice commands. This interface serves as an alternative to the standard mouse, especially for people with upper-limb motor disabilities. We designed, developed, and evaluated our multimodal interface. We built predictive models of our system to serve as a basis for designers: a Fitts' law model to predict pointing time, and a Keystroke-Level Model (KLM) of the whole system to predict the time needed to complete a task with this interface. Since our primary interest is giving people with disabilities easy access to information on the web, we tested our interface on two of the most common web-browsing activities today, namely e-commerce and social networking. In these web-navigation scenarios, we compared the performance of our system to that of the standard mouse and of voice-only navigation. Our system performed better than voice-only navigation and also addressed some of the limitations of voice-only interaction, such as ambiguity and a long list of unstructured command vocabulary.
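For reference, a Fitts' law model of this kind conventionally follows the standard Shannon formulation, and a KLM prediction is the usual weighted sum of operator unit times. The sketch below uses only the conventional symbols; the fitted constants and operator times for our particular interface are not given in this abstract.

```latex
% Standard Shannon formulation of Fitts' law: MT is the predicted
% pointing (movement) time, D the distance to the target, W the
% target width; a and b are device-specific constants fitted by
% regression (their values for our interface are not stated here).
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)

% Generic KLM prediction: total task time is the sum of the unit
% times t_i of the operators used, each weighted by its count n_i.
T_{\mathrm{task}} = \sum_{i} n_i \, t_i
```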