Unblind Text Inputs: Predicting Hint-text of Text Input in Mobile Apps via LLM

Abstract

Mobile apps have become indispensable for accessing information and services, especially for low-vision users. Users with visual impairments rely on screen readers to read the content of each screen and understand which elements need to be operated. Screen readers read the hint-text attribute of a text input component to remind visually impaired users what to fill in. Unfortunately, our analysis of 4,501 Android apps with text inputs shows that over 76% of them are missing hint-text. These issues are mostly caused by developers' lack of awareness of the needs of visually impaired users. To overcome these challenges, we developed an LLM-based hint-text generation model called HintDroid, which analyzes the GUI information of input components and uses in-context learning to generate hint-text. To ensure generation quality, we further designed a feedback-based inspection mechanism to adjust the hint-text. Automated experiments demonstrate high BLEU scores, and a user study further confirms its usefulness. HintDroid can help not only visually impaired users but also sighted users understand what an input component requires. HintDroid demo video: https://youtu.be/FWgfcctRbfI.
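The abstract describes a three-stage pipeline: extract GUI information for an input component, build an in-context-learning prompt for the LLM, and run a feedback-based inspection on the candidate hint-text. The sketch below is a hypothetical illustration of that flow, not the authors' implementation; the `InputComponent` fields, example pairs, and the `inspect` heuristic are all assumptions for demonstration, and the actual LLM call is replaced by a stand-in string.

```python
# Hypothetical sketch of the HintDroid pipeline described in the abstract.
# All names, prompt formats, and the feedback heuristic are illustrative
# assumptions, not the paper's actual implementation.
from dataclasses import dataclass

@dataclass
class InputComponent:
    resource_id: str    # e.g. Android view id such as "et_email"
    nearby_label: str   # text of a neighbouring TextView
    activity_name: str  # screen the component belongs to

# Illustrative few-shot (GUI context, hint-text) pairs for in-context learning.
EXAMPLES = [
    (InputComponent("et_email", "Email", "LoginActivity"), "Enter your email address"),
    (InputComponent("et_search", "Search", "MainActivity"), "Search for products"),
]

def build_prompt(target: InputComponent) -> str:
    """Assemble an in-context-learning prompt from example (GUI, hint) pairs."""
    lines = ["Generate hint-text for a mobile text input given its GUI context.\n"]
    for comp, hint in EXAMPLES:
        lines.append(f"GUI: id={comp.resource_id}, label={comp.nearby_label}, "
                     f"screen={comp.activity_name}\nHint: {hint}\n")
    lines.append(f"GUI: id={target.resource_id}, label={target.nearby_label}, "
                 f"screen={target.activity_name}\nHint:")
    return "\n".join(lines)

def inspect(hint: str, target: InputComponent) -> bool:
    """Toy feedback check: accept a hint only if it relates to the field's label."""
    return bool(hint) and target.nearby_label.lower() in hint.lower()

target = InputComponent("et_phone", "Phone", "SignupActivity")
prompt = build_prompt(target)          # this prompt would be sent to the LLM
candidate = "Enter your phone number"  # stand-in for the model's reply
if not inspect(candidate, target):
    # Feedback loop: extend the prompt and ask the model to regenerate.
    prompt += "\n(The previous hint did not mention the field; try again.)"
```

In this reading, the feedback-based inspection closes the loop: a rejected hint is fed back into the prompt so the model can regenerate a better one.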

Award
Honorable Mention
Authors
Zhe Liu
Institute of Software, Chinese Academy of Sciences, Beijing, China
Chunyang Chen
Monash University, Melbourne, Victoria, Australia
Junjie Wang
Institute of Software, Chinese Academy of Sciences, Beijing, China
Mengzhuo Chen
Institute of Software, Chinese Academy of Sciences, Beijing, China
Boyu Wu
Institute of Software, Chinese Academy of Sciences, Beijing, China
Yuekai Huang
University of Chinese Academy of Sciences, Beijing, China; Laboratory for Internet Software Technologies, Institute of Software, Chinese Academy of Sciences, Beijing, China
Jun Hu
Institute of Software, Chinese Academy of Sciences, Beijing, China
Qing Wang
Institute of Software, Chinese Academy of Sciences, Beijing, China
Paper URL

https://doi.org/10.1145/3613904.3642939

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Assistive Interactions: Social and Collaborative Interactions for Users Who Are Blind or Low Vision

320 'Emalani Theater
5 presentations
2024-05-16 01:00:00
2024-05-16 02:20:00