American Sign Language (ASL) is the primary language of many Deaf and Hard of Hearing (DHH) individuals. However, existing learning resources often lack timely, individualized feedback, leaving learners uncertain about the accuracy of their signing. We introduce a novel egocentric ASL learning system that integrates stereo vision, error detection across four manual ASL parameters (handshape, orientation, location, and movement), and large language model (LLM)–driven natural language feedback. To our knowledge, this is the first system to deliver error-aware, pedagogically grounded feedback for ASL learners. A formative study with 15 ASL teachers and 30 learners (from both Deaf and hearing backgrounds) grounds the system's motivation and design goals, while a system evaluation in which 13 Deaf ASL participants (novice to advanced) practiced 230 signs provides initial evidence of feasibility and short-term pedagogical promise within the primary user community. Across these two complementary studies, we identify key design principles: prioritizing reliability over sensitivity, stratifying feedback by error severity, and leveraging egocentric alignment for natural practice. Collectively, these contributions establish a foundation for scalable ASL education and offer generalizable insights for designing AI-mediated feedback in Human-Computer Interaction (HCI).