American Sign Language (ASL) is among the most widely used sign languages in the world. Despite its widespread use, software that translates it into English has remained scarce.
However, an engineering student from India recently came up with a revolutionary idea.
A report by the Times of India revealed that Priyanjali Gupta, a junior at India’s Vellore Institute of Technology, has developed an AI model that translates American Sign Language gestures into English.
She credited Nicholas Renotte’s video on real-time sign language detection as the inspiration behind the AI-powered model. Her model translates sign-language hand gestures by building on a pre-trained model.
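The report does not describe the implementation in detail. As a rough, hedged illustration of how a single-frame gesture classifier of this general kind can work, the sketch below uses a stand-in feature extractor (`fake_features`, hypothetical, in place of embeddings from a real pre-trained network) and a nearest-centroid classifier over a tiny synthetic dataset; none of the names here come from the actual project.

```python
import numpy as np

def fake_features(frame):
    """Stand-in for a pre-trained backbone: just flatten the frame.
    (A real system would use embeddings from a pre-trained detector/CNN.)"""
    return frame.astype(float).ravel()

def train_centroids(frames_by_label):
    """Average the feature vectors of each labelled gesture."""
    return {label: np.mean([fake_features(f) for f in frames], axis=0)
            for label, frames in frames_by_label.items()}

def classify(frame, centroids):
    """Nearest-centroid lookup: the closest gesture centroid wins."""
    v = fake_features(frame)
    return min(centroids, key=lambda label: np.linalg.norm(centroids[label] - v))

# Tiny synthetic "dataset": bright frames stand for gesture "A", dark for "B".
rng = np.random.default_rng(1)
data = {
    "A": [rng.uniform(0.7, 1.0, (8, 8)) for _ in range(5)],
    "B": [rng.uniform(0.0, 0.3, (8, 8)) for _ in range(5)],
}
centroids = train_centroids(data)
pred = classify(rng.uniform(0.7, 1.0, (8, 8)), centroids)  # a new bright frame
print(pred)  # → A
```

In a real pipeline the webcam frame would first be cropped to the hand region and passed through the pre-trained network; only the final classification step would resemble this toy lookup.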
“The dataset is manually made with a computer webcam and given annotations. The model, for now, is trained on single frames. To detect videos, the model has to be trained on multiple frames, for which I’m likely to use Long short-term memory (LSTM) networks,” Priyanjali was quoted as saying by TOI.
"It is really very challenging to build a deep learning model dedicated to sign language detection. But I hope the open-source community will come up with a solution very soon."
Priyanjali is not the only developer in this field. In 2016, two students at the University of Washington in the US developed gloves that translate sign language into speech or text.
The students, Navid Azodi and Thomas Pryor, gained recognition within the scientific community and won the Lemelson-MIT Student Prize for their invention.