Hybrid FiST_CNN Approach for Feature Extraction for Vision-Based Indian Sign Language Recognition
Akansha Tyagi, Department of Computer Science and Engineering, Maharishi Markandeshwar (Deemed to be) University, India
Sandhya Bansal, Department of Computer Science and Engineering, Maharishi Markandeshwar (Deemed to be) University, India
Abstract: Indian Sign Language (ISL) is the language commonly used by the deaf-mute community in the Indian subcontinent. Effective feature extraction is essential for the automatic recognition of gestures. This paper aims at developing an efficient feature extraction technique using Features from Accelerated Segment Test (FAST), the Scale-Invariant Feature Transform (SIFT), and Convolutional Neural Networks (CNN). FAST and SIFT are used to detect and describe features, respectively, and a CNN classifies the hybridized FAST-SIFT features. The system is implemented and tested using the Python-based Keras library. The proposed technique is evaluated on 34 ISL gestures (24 alphabets and 10 digits) and compared against CNN and SIFT_CNN baselines; it is also tested on two publicly available datasets, the Jochen Triesch Dataset (JTD) and the NUS-II dataset. The proposed approach outperforms several existing ISLR works, achieving accuracies of 97.89%, 95.68%, 94.90%, and 95.87% on ISL alphabets, MNIST, JTD, and NUS-II, respectively.
Keywords: Sign language, Indian sign language, features from accelerated segment test, scale-invariant feature transform, convolutional neural networks.
Received September 9, 2020; accepted March 10, 2021
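As a rough illustration of the pipeline summarized in the abstract, the sketch below detects keypoints with FAST, describes them with SIFT, and feeds the stacked descriptors to a small Keras CNN. It assumes OpenCV and TensorFlow/Keras are available; the keypoint budget, layer sizes, and all other parameters are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of a FAST -> SIFT -> CNN pipeline (assumed parameters, not the authors' exact setup).
import numpy as np
import cv2
from tensorflow.keras import layers, models

NUM_KEYPOINTS = 64   # assumed fixed keypoint budget per image
DESC_DIM = 128       # SIFT descriptor length
NUM_CLASSES = 34     # 24 ISL alphabets + 10 digits

fast = cv2.FastFeatureDetector_create()   # FAST: detect corner keypoints
sift = cv2.SIFT_create()                  # SIFT: describe the detected keypoints

def fast_sift_features(gray_img):
    """Detect keypoints with FAST, describe them with SIFT, and return a
    fixed-size (NUM_KEYPOINTS, DESC_DIM, 1) tensor for the CNN."""
    kps = fast.detect(gray_img, None)
    # Keep the strongest keypoints so every image yields the same shape.
    kps = sorted(kps, key=lambda k: k.response, reverse=True)[:NUM_KEYPOINTS]
    kps, desc = sift.compute(gray_img, kps)
    feats = np.zeros((NUM_KEYPOINTS, DESC_DIM), dtype=np.float32)
    if desc is not None:
        feats[:desc.shape[0]] = desc[:NUM_KEYPOINTS]  # zero-pad if fewer keypoints
    return feats[..., np.newaxis]

# A small CNN classifier over the stacked descriptor "image" (layer sizes are assumptions).
model = models.Sequential([
    layers.Input(shape=(NUM_KEYPOINTS, DESC_DIM, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Fixing the number of keypoints per image (and zero-padding when FAST finds fewer) is one simple way to give the CNN a constant-shape input; the paper's actual hybridization of FAST-SIFT features may combine the descriptors differently.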