DEEP LEARNING MODEL FOR SIGN LANGUAGE INTERPRETATION USING WEB CAMERA
European Journal of Molecular & Clinical Medicine,
2020, Volume 7, Issue 8, Pages 5467-5475
Abstract
Sign languages are languages that convey meaning solely through gestures. Communication in sign language combines manual gestures with non-manual elements. A sign language recognition framework facilitates communication between people who are hard of hearing and the world around them; it also helps in communicating with machines. One of the most widely used forms of gesture-based communication is American Sign Language (ASL). In the proposed work, letters are detected from video frames using a convolutional neural network (CNN) and then converted into speech using Google Text-to-Speech (gTTS). The system is trained with 75% of the images and tested with the remaining 25% of the images from the database.
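The 75%/25% train/test split described above can be sketched as follows. This is a minimal illustration, not the authors' code: the dataset shape, image size, and function names are assumptions, and the gTTS step is shown only as a comment since it is the final stage of the described pipeline.

```python
import numpy as np

def split_dataset(images, labels, train_frac=0.75, seed=0):
    """Shuffle and split a dataset into train/test portions (75%/25% as in the paper)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    cut = int(len(images) * train_frac)
    train_idx, test_idx = idx[:cut], idx[cut:]
    return (images[train_idx], labels[train_idx]), (images[test_idx], labels[test_idx])

# Hypothetical database of 1000 ASL letter images (64x64 grayscale) with class labels.
images = np.zeros((1000, 64, 64), dtype=np.float32)
labels = np.zeros(1000, dtype=np.int64)

(x_train, y_train), (x_test, y_test) = split_dataset(images, labels)
print(len(x_train), len(x_test))  # 750 250

# After a CNN predicts a sequence of letters, the recognized text could be
# voiced with gTTS, e.g.:
#   from gtts import gTTS
#   gTTS(text=predicted_text, lang="en").save("speech.mp3")
# (requires network access; shown here only as the final pipeline stage)
```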