SynapSign: An Advanced Machine Learning Framework for American Sign Language Recognition Utilizing a Novel Landmark-Based Dataset

dc.contributor.authorUysal, Erdoğan
dc.contributor.authorBalikci, İrem
dc.contributor.authorÖztimur Karadağ, Özge
dc.contributor.authorYuce, Yilmaz
dc.date.accessioned2026-01-24T12:20:56Z
dc.date.available2026-01-24T12:20:56Z
dc.date.issued2024
dc.departmentAlanya Alaaddin Keykubat Üniversitesi
dc.description2024 Innovations in Intelligent Systems and Applications Conference, ASYU 2024 -- 2024-10-16 through 2024-10-18 -- Ankara -- 204562
dc.description.abstractHearing loss is a common condition affecting a significant proportion of the world's population, creating barriers to effective communication. Sign language, particularly American Sign Language (ASL), is an important tool for the social integration and personal growth of people with hearing loss, and the need for effective tools to facilitate learning and practicing ASL is increasingly recognized. Although many studies and software tools recognize signs with machine learning techniques at relatively high accuracy, there remains a need for higher performance. This paper presents SynapSign, a desktop application designed to enhance ASL learning using machine learning algorithms. To build the application on the most accurate model, the performances of Random Forest, XGBoost, and Deep Neural Network (DNN) classifiers were investigated. For this purpose, a dataset of 2600 images was prepared: for each of the 26 ASL letters, 100 hand images annotated with 21 hand landmarks were collected using Google's MediaPipe technology for accurate hand gesture recognition. The three classifiers were then trained on this dataset, and the resulting models were tested and compared on accuracy, precision, and recall. The results reveal that the Random Forest model performs slightly better than the other models on all three metrics, at 99.6%, 99.3%, and 99.7%, respectively. SynapSign was therefore developed using this model, with a user interface that captures a sign image from a live camera stream and labels it with 21 hand landmarks using the MediaPipe framework's default hand detection model. Compared to traditional methods, the application provides a more interactive and engaging learning experience, allowing users to practice and improve their ASL skills with real-time feedback. Our findings suggest that SynapSign could serve as a valuable tool for both educational and accessibility purposes, addressing the gap in resources available to ASL learners. © 2024 IEEE.
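
The abstract describes a pipeline of MediaPipe landmark extraction followed by a Random Forest classifier evaluated on accuracy, precision, and recall. The following is a minimal sketch of such a pipeline, not the authors' implementation: the dataset path, folder layout, and helper name `extract_landmarks` are hypothetical, and hyperparameters such as `n_estimators` are assumptions not stated in the paper.

```python
# Sketch: 21 MediaPipe hand landmarks -> 63-dim feature vector -> Random Forest.
# DATA_DIR layout (assumed): asl_dataset/A/*.jpg ... asl_dataset/Z/*.jpg
import os
import cv2
import numpy as np
import mediapipe as mp
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

mp_hands = mp.solutions.hands
DATA_DIR = "asl_dataset"  # hypothetical path, one subfolder per ASL letter


def extract_landmarks(image_path, hands):
    """Return a flat (63,) array of the 21 hand landmarks (x, y, z),
    or None if MediaPipe detects no hand in the image."""
    img = cv2.imread(image_path)
    if img is None:
        return None
    result = hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([c for p in lm for c in (p.x, p.y, p.z)])


X, y = [], []
with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
    for letter in sorted(os.listdir(DATA_DIR)):
        folder = os.path.join(DATA_DIR, letter)
        for fname in os.listdir(folder):
            vec = extract_landmarks(os.path.join(folder, fname), hands)
            if vec is not None:
                X.append(vec)
                y.append(letter)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.2, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# Macro averaging over the 26 letter classes is one reasonable choice;
# the paper does not specify which averaging was used.
print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred, average="macro"))
print("recall   :", recall_score(y_test, pred, average="macro"))
```

Training on landmark coordinates rather than raw pixels keeps the feature space small (63 values per image), which is one reason tree-based models like Random Forest can compete with a DNN on a dataset of this size.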
dc.description.sponsorshipIEEE SMC; IEEE Turkiye Section
dc.identifier.doi10.1109/ASYU62119.2024.10757044
dc.identifier.isbn9798350379433
dc.identifier.scopus2-s2.0-85213356503
dc.identifier.scopusqualityN/A
dc.identifier.urihttps://doi.org/10.1109/ASYU62119.2024.10757044
dc.identifier.urihttps://hdl.handle.net/20.500.12868/4681
dc.indekslendigikaynakScopus
dc.language.isoen
dc.publisherInstitute of Electrical and Electronics Engineers Inc.
dc.relation.publicationcategoryConference Object - International - Institutional Academic Staff
dc.rightsinfo:eu-repo/semantics/closedAccess
dc.snmzKA_Scopus_20260121
dc.subjectDeep Neural Network (DNN)
dc.subjectHand Gesture Recognition
dc.subjectHearing Impairment
dc.subjectInteraction System
dc.subjectMachine Learning
dc.subjectRandom Forest
dc.subjectSign Language
dc.subjectXGBoost
dc.titleSynapSign: An Advanced Machine Learning Framework for American Sign Language Recognition Utilizing a Novel Landmark-Based Dataset
dc.typeConference Object