An Effective Navigation System Combining both Object Detection and Obstacle Detection Based on Depth Information for the Visually Impaired


dc.contributor.author Raihan, Md. Nishat
dc.contributor.author Seym, Hossain Mohammad
dc.date.accessioned 2020-10-19T17:10:05Z
dc.date.available 2020-10-19T17:10:05Z
dc.date.issued 2018-11-15
dc.identifier.uri http://hdl.handle.net/123456789/547
dc.description Supervised by Mr. Rafsanjany Kushol en_US
dc.description.abstract Object detection remains one of the most heavily researched areas in digital image processing, and the introduction of the convolutional neural network (CNN) has revolutionized detection approaches. Although detection algorithms have come a long way, detecting objects for blind or visually impaired (BVI) people is a completely different scenario: for visually challenged users the task is focused primarily on obstacle detection rather than object recognition. Based on this concept, several approaches have been made to design smart canes that serve as helpful walking tools. More robust approaches capture images in real time through camera devices and process them to detect objects or obstacles. Ensuring both sufficient performance and cost efficiency at the same time remains a challenge, and in many cases the design architecture is not convenient for visually handicapped persons. Moreover, very few attempts have been made to combine depth information with an object detection method in real time. In this paper, we propose a completely new system framework that performs detection for visually challenged people. We use a depth-sensing camera and a portable computing device to analyze the depth data and combine it with a detection method, detecting objects and obstacles in real time along with their relative position and distance from the user. We perform object detection using the YOLO (You Only Look Once) algorithm, which is faster than almost any other recent object detection algorithm. Even if YOLO fails to detect an object, for example due to lack of light, the depth information still allows the obstacle to be detected and its position and distance to be calculated. Finally, all the information gathered in real time is narrated to the user in a convenient manner. (Two illustrative code sketches of this pipeline follow the record below.) en_US
dc.language.iso en en_US
dc.publisher Department of Computer Science and Engineering, Islamic University of Technology, Board Bazar, Gazipur, Bangladesh en_US
dc.title An Effective Navigation System Combining both Object Detection and Obstacle Detection Based on Depth Information for the Visually Impaired en_US
dc.type Thesis en_US
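
The detection-plus-depth fusion described in the abstract can be pictured with a short sketch. This is a minimal illustration under stated assumptions, not the thesis implementation: it uses the pyrealsense2 Python bindings for the Intel RealSense D435 named in the record, and detect_objects() is a hypothetical placeholder for a YOLO inference wrapper. The sketch aligns depth to color, queries the depth at each bounding box's center to get the distance in metres, and derives a coarse left/center/right position from the box's horizontal placement.

    # Hedged sketch: fuse YOLO-style detections with RealSense depth.
    # detect_objects() is a hypothetical placeholder, not the thesis code.
    import numpy as np
    import pyrealsense2 as rs

    def detect_objects(color_image):
        # Placeholder: run a YOLO model, return [(label, (x1, y1, x2, y2)), ...]
        raise NotImplementedError

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    pipeline.start(config)
    align = rs.align(rs.stream.color)  # map depth pixels onto the color frame

    try:
        frames = align.process(pipeline.wait_for_frames())
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        color_image = np.asanyarray(color_frame.get_data())
        width = color_frame.get_width()
        for label, (x1, y1, x2, y2) in detect_objects(color_image):
            cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
            distance_m = depth_frame.get_distance(cx, cy)  # metres at box centre
            position = ("left", "center", "right")[min(2, 3 * cx // width)]
            print(f"{label}: {distance_m:.2f} m to the {position}")
    finally:
        pipeline.stop()

Querying a single pixel is the simplest choice; a median over a small window around the box centre would be more robust to depth noise.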
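
When the detector returns nothing, for instance in low light, the abstract notes that the depth map alone still permits obstacle detection, and the result is spoken through pyttsx3, the cross-platform text-to-speech library cited in the record. The sketch below is again an assumption-laden illustration: the 1-metre warning threshold is invented for the example, and nearest_obstacle() is a hypothetical helper, not taken from the thesis.

    # Hedged fallback sketch: flag the nearest obstacle from depth alone,
    # then narrate it. Threshold and helper name are illustrative assumptions.
    import numpy as np
    import pyttsx3

    def nearest_obstacle(depth_m, threshold_m=1.0):
        # depth_m: float array of per-pixel distances in metres; zeros are
        # invalid readings and are ignored. Returns (distance, side) or None.
        valid = np.where(depth_m > 0, depth_m, np.inf)
        y, x = np.unravel_index(np.argmin(valid), valid.shape)
        if valid[y, x] > threshold_m:
            return None
        side = ("left", "center", "right")[min(2, 3 * x // valid.shape[1])]
        return float(valid[y, x]), side

    engine = pyttsx3.init()
    hit = nearest_obstacle(np.full((480, 640), 2.5))  # dummy 2.5 m frame
    engine.say("Path clear" if hit is None
               else f"Obstacle {hit[0]:.1f} meters to the {hit[1]}")
    engine.runAndWait()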

