Robust Hearing-Impaired Speaker Recognition from Speech using Deep Learning Networks in Native Language


Jeyalakshmi Chelliah

Department of ECE, K.Ramakrishnan College of Engineering, India


KiranBala Benny

Department of Artificial Intelligence and Data Science, K.Ramakrishnan College of Engineering, India


Revathi Arunachalam

School of EEE, Sastra Deemed to be University, India


Viswanathan Balasubramanian

Department of ECE, K.Ramakrishnan College of Engineering, India


 

Abstract: Research in speaker recognition has grown rapidly in recent years owing to its applications in security, criminal investigation, and other major fields. A speaker's identity is conveyed by the way they speak rather than by the words spoken. Identifying hearing-impaired speakers from their speech is therefore a challenging task, since their speech is highly distorted. This paper introduces a new task: recognizing Hearing-Impaired (HI) speakers using speech as a biometric in their native language, Tamil. Although their speech is hard to recognize even for their parents and teachers, the proposed system identifies them accurately by first applying speech enhancement. Because of the wide variety in their utterances, Mel Frequency Cepstral Coefficient (MFCC) features are derived from the speech and presented as a spectrogram to a Convolutional Neural Network (CNN), instead of applying the spectrogram of the raw speech; this step is unnecessary for ordinary speakers. With CNN as the modelling technique, the proposed system attains 80% accuracy with low complexity. An Auto-Associative Neural Network (AANN) was also evaluated as a modelling technique; its accuracy is only 9%, showing that CNN performs considerably better than AANN for recognizing HI speakers. The system is therefore well suited to biometric and other security-related applications for hearing-impaired speakers.
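The pipeline described above first converts each utterance into MFCC features before they are arranged as a spectrogram-like image for the CNN. The following is a minimal illustrative sketch of the standard MFCC computation (framing, windowing, power spectrum, mel filterbank, log, DCT-II) in plain NumPy; the frame length, hop size, and filter counts are assumed defaults, not values taken from the paper.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr, fmin=0.0, fmax=None):
    # Triangular filters spaced evenly on the mel scale.
    fmax = fmax or sr / 2
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    hz_pts = inv_mel(np.linspace(mel(fmin), mel(fmax), n_filters + 2))
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):           # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):          # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160,
         n_fft=512, n_filters=26, n_ceps=13):
    # Frame the signal (25 ms frames, 10 ms hop at 16 kHz) and window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Log mel filterbank energies.
    fb = mel_filterbank(n_filters, n_fft, sr)
    log_energy = np.log(power @ fb.T + 1e-10)
    # DCT-II over the filter axis yields the cepstral coefficients.
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps),
                                    (2 * n + 1) / (2 * n_filters)))
    return log_energy @ basis.T   # shape: (n_frames, n_ceps)
```

Stacking the resulting `(n_frames, n_ceps)` matrix over time gives the 2-D MFCC "spectrogram" that can be fed to a CNN as an image, which is the representation the abstract describes.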

Keywords: Speaker recognition, voice impaired, energy, deep-learning-based convolutional neural network, mel frequency cepstral coefficient, auto-associative neural network, back-propagation algorithm.

Received November 26, 2020; accepted December 26, 2021

https://doi.org/10.34028/iajit/20/1/11

