Sign language recognition in the deep learning era: A comprehensive study of model performance, robustness and deployment considerations

Walter, Ashish Kumar and Srivastava, Garima and Kumari, Lalita (2025) Sign language recognition in the deep learning era: A comprehensive study of model performance, robustness and deployment considerations. International Journal of Science and Research Archive, 15 (3). pp. 398-407. ISSN 2582-8185

Article PDF
IJSRA-2025-1699.pdf - Published Version
Available under License Creative Commons Attribution Non-commercial Share Alike.

Abstract

This paper presents a dual-domain evaluation of classical and modern architectures for Sign Language Recognition (SLR) and Traffic Sign Classification (TSC), addressing critical challenges in accessibility and autonomous systems. We assess 20 SLR models, spanning CNNs, hybrid CNN-LSTM pipelines, and transformer-based frameworks, on the Sign Language MNIST and ASL Fingerspelling datasets, measuring accuracy, computational efficiency, and robustness. For TSC, we benchmark 10 models, including lightweight CNNs, vision transformers, and object detectors, on the GTSRB, BelgiumTS, and TT100K datasets, examining classification and detection performance under varying noise conditions to assess real-world applicability. We analyze trade-offs between model complexity, inference speed, and deployment feasibility, and provide guidelines for edge-optimized implementations.
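The abstract describes the evaluation protocol only at a high level. As an illustration of the kind of accuracy, latency, and noise-robustness measurement it refers to, the sketch below benchmarks a small classifier under additive Gaussian noise. The architecture, noise levels, and synthetic stand-in data are assumptions for illustration, not the authors' actual models, datasets, or pipeline.

```python
# Hedged sketch: a minimal accuracy/latency/robustness benchmark loop,
# assuming a PyTorch classifier and 28x28 grayscale inputs (as in Sign
# Language MNIST). Synthetic tensors stand in for the real test sets.
import time
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Illustrative lightweight CNN; not an architecture from the paper."""
    def __init__(self, num_classes: int = 24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

@torch.no_grad()
def benchmark(model, images, labels, noise_std=0.0):
    """Return (top-1 accuracy, mean latency per image in ms) under Gaussian noise."""
    model.eval()
    noisy = images + noise_std * torch.randn_like(images)
    start = time.perf_counter()
    preds = model(noisy).argmax(dim=1)
    latency_ms = (time.perf_counter() - start) / len(images) * 1e3
    accuracy = (preds == labels).float().mean().item()
    return accuracy, latency_ms

if __name__ == "__main__":
    model = SmallCNN()
    images = torch.rand(256, 1, 28, 28)      # placeholder for real test images
    labels = torch.randint(0, 24, (256,))    # placeholder labels
    for std in (0.0, 0.1, 0.3):              # assumed noise levels
        acc, ms = benchmark(model, images, labels, noise_std=std)
        print(f"noise std {std:.1f}: accuracy {acc:.3f}, {ms:.3f} ms/image")
```

Sweeping the noise standard deviation while recording both accuracy and per-image latency is one simple way to expose the robustness-versus-efficiency trade-offs the study discusses for edge deployment.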

Item Type: Article
Official URL: https://doi.org/10.30574/ijsra.2025.15.3.1699
Uncontrolled Keywords: Sign Language Recognition (SLR); Traffic Sign Classification (TSC); Deep Learning Architectures; Computational Efficiency; Edge Deployment
Depositing User: Editor IJSRA
Date Deposited: 27 Jul 2025 13:32
URI: https://eprint.scholarsrepository.com/id/eprint/2206