
Sign Language Translator using Deep Learning

EasyChair Preprint no. 2541

4 pages
Date: February 4, 2020


People affected by speech impairment cannot communicate through hearing and speech; they rely on sign language instead. Sign language is shared among the speech-impaired community, but signers find it hard to communicate with non-signers (people who are not proficient in sign language), which makes both their informal and formal communication difficult and creates a standing need for sign language interpreters. With recent advances in deep learning there has been favorable progress in gesture recognition and motion recognition, and advances in computer vision now make it possible to track hand gestures reliably. The proposed system performs real-time translation of hand gestures into equivalent English text: it takes hand gestures as input through video and translates them into text that a non-signer can understand. A convolutional neural network (CNN) is used to classify the hand gestures. Deploying this system would narrow the communication gap between signers and non-signers and make communication less cumbersome for speech-impaired people.
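The pipeline sketched in the abstract is: capture a video frame of the hand gesture, run it through a CNN, and map the predicted class to text. As a minimal illustration of the CNN classification step, the NumPy forward pass below runs one convolution, ReLU, max pooling, and a dense softmax layer over a toy frame. The layer sizes, the single filter, and the four output classes are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_frame(frame, kernel, weights, bias):
    """Conv -> ReLU -> pool -> flatten -> dense -> softmax over sign classes."""
    feat = np.maximum(conv2d(frame, kernel), 0.0)  # ReLU activation
    pooled = max_pool(feat)
    logits = pooled.ravel() @ weights + bias
    return softmax(logits)

# Toy demo: an 8x8 "frame", one 3x3 filter, 4 hypothetical sign classes.
rng = np.random.default_rng(0)
frame = rng.random((8, 8))
kernel = rng.standard_normal((3, 3))
weights = rng.standard_normal((9, 4))  # pooled 3x3 map flattened -> 4 classes
bias = np.zeros(4)
probs = classify_frame(frame, kernel, weights, bias)
```

In practice the filters and weights would be learned from labeled gesture images rather than drawn at random, and a deep-learning framework would replace these hand-rolled layers; this sketch only shows the shape of the computation.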

Keyphrases: CNN, deep learning, gesture recognition, motion recognition

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:2541,
  author = {Uday Patil and Saraswati Nagtilak and Sunil Rai and Swaroop Patwari and Prajay Agarwal and Rahul Sharma},
  title = {Sign Language Translator using Deep Learning},
  howpublished = {EasyChair Preprint no. 2541},

  year = {EasyChair, 2020}}