
Camera-based Sign Language Recognition and Simultaneous Speech Generation: A Survey

EasyChair Preprint no. 4855

4 pages · Date: January 3, 2021


Hearing people regularly face difficulty interpreting for deaf-mute people, who primarily use sign language to communicate among themselves and with others. Despite efforts by governments worldwide, such as the New Zealand media's provision of a sign language expert to interpret all news for the hearing impaired, active participation of the impaired is still at a very rudimentary stage. Further, only a few people today are proficient in communicating via sign language, so the majority of the population has little understanding of it. This can be especially problematic for deaf-mute people in situations of distress such as pain or fraud, or in emergencies such as fire or kidnapping. All of these problems could be substantially reduced if this language barrier were effectively bridged. This paper surveys a number of research papers on this topic. We examine the various approaches to machine-based sign language recognition, with particular attention to the feasibility, methodology, and accuracy reported in the surveyed papers.

Keyphrases: Adaline Neural Network, Artificial Neural Network, Backpropagation, Convolutional Neural Networks, Feedforward, Hilbert Curve, Hough Transform, Multilayer Perceptron, Random Forests, Stacked Denoising Autoencoders, Support Vector Machines

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:

@booklet{EasyChair:4855,
  author = {Ayushi Patani and Varun Gawande and Jash Gujarathi and Vedant Puranik},
  title = {Camera-based Sign Language Recognition and Simultaneous Speech Generation: A Survey},
  howpublished = {EasyChair Preprint no. 4855},
  year = {EasyChair, 2021}}