Dimensionality Reduction for Offline Alphabet Arabic Sign Language Recognition using Deep Learning
Journal of Biomedical Physics and Engineering
Articles in press, corrected proof, available online from 21 Aban 1404 (12 November 2025). Full text (1.31 MB)
Article type: Original Research
Authors
Sari Awwad*¹; Subhieh M. El-Salhi²; Bashar Igried¹
¹Department of Computer Science and Applications, Faculty of Prince Al-Hussein Bin Abdullah II for Information Technology, The Hashemite University, Zarqa, Jordan
²Department of Computer Information Systems, Faculty of Prince Al-Hussein Bin Abdullah II for Information Technology, The Hashemite University, Zarqa, Jordan
Abstract
Background: Arabic Sign Language (ArSL) recognition remains technologically underdeveloped compared to American Sign Language (ASL) recognition. This disparity restricts communication accessibility for individuals with hearing impairments in Arabic-speaking regions, particularly in offline environments with limited computational resources.
Objective: This study aimed to develop a robust offline recognition system for ArSL by integrating Principal Component Analysis (PCA) for dimensionality reduction, Scale-Invariant Feature Transform (SIFT) for feature extraction, and Convolutional Neural Networks (CNNs) for gesture classification.
Material and Methods: This experimental, quantitative study used a curated dataset of ArSL gestures obtained from Kaggle. Preprocessing involved normalization, contrast enhancement, and noise reduction. SIFT was used to extract invariant features, while PCA reduced computational complexity. CNN architectures were trained to recognize gestures and were assessed using accuracy, precision, recall, F1-score, loss, the confusion matrix, and the Receiver Operating Characteristic (ROC) curve.
Results: The system achieved an accuracy of 86.64%, surpassing conventional models such as SIFT combined with Support Vector Machines (SIFT+SVM), which reached 84.45%. The integration of PCA and SIFT enhanced recognition efficiency and reduced model complexity. Deep learning methods showed superior adaptability and precision across gesture types.
Conclusion: This study presents a robust offline ArSL recognition system that enhances communication, education, and social participation for individuals with hearing impairments in Arabic-speaking regions.
Keywords
Sign Language; Deep Learning; Image Processing; Computer-Assisted; Gesture; Principal Component Analysis
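The abstract describes a pipeline in which PCA compresses high-dimensional feature descriptors before classification. The paper's implementation details are not given here, so the following is only a minimal numpy sketch of that dimensionality-reduction step: the 128-dimensional descriptors mimic standard SIFT output, while the descriptor count (500) and the target dimensionality (32) are hypothetical choices for illustration.

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project feature vectors onto their top principal components.

    Returns the reduced data and the fraction of variance retained.
    """
    mean = features.mean(axis=0)
    centered = features - mean
    # SVD of the centered data matrix; rows of Vt are principal directions
    _, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    variances = singular_values ** 2  # proportional to explained variance
    retained = variances[:n_components].sum() / variances.sum()
    return centered @ components.T, retained

rng = np.random.default_rng(0)
# Hypothetical stand-in for 500 SIFT descriptors (128-D each)
descriptors = rng.normal(size=(500, 128))
reduced, retained = pca_reduce(descriptors, 32)
print(reduced.shape)  # (500, 32)
```

In a full pipeline along the lines the abstract describes, the reduced vectors would then be fed to a CNN classifier; that stage is omitted here because it depends on a deep learning framework and architecture choices the abstract does not specify.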