This study session has ended. Thank you for participating.
Shared control wheelchairs can help users navigate through crowds by letting the person drive the wheelchair while receiving support in avoiding pedestrians. To date, research into shared control has largely overlooked the perspectives of wheelchair users. In this paper, we present two studies that aim to address this gap. The first study involved a series of semi-structured interviews with wheelchair users, which highlighted the presence of two distinct interaction loops: one between the user and the wheelchair, and a second between the user and the crowd. In the second study, we engaged with wheelchair users and designers to co-design appropriate feedback loops for future shared control interaction interfaces. Based on the results of the co-design session, we present design implications for shared control wheelchairs around the need for empathy, embodiment, and social awareness; situational awareness and adaptability; and selective information management.
Searching for the meaning of an unfamiliar sign-language word in a dictionary is difficult for learners, but emerging sign-recognition technology will soon enable users to search by submitting a video of themselves performing the word they recall. However, sign-recognition technology is imperfect, and users may need to search through a long list of possible matches to find the sign they are seeking. To speed this search, we present a hybrid-search approach, in which users begin with a video-based query and then filter the search results by linguistic properties, e.g., handshape. We interviewed 32 ASL learners about their preferences for the content and appearance of the search-results page and filtering criteria. A between-subjects experiment with 20 ASL learners revealed that our hybrid search system outperformed a video-based search system along multiple satisfaction and performance metrics. Our findings provide guidance for designers of video-based sign-language dictionary search systems, with implications for other search scenarios.
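To make the hybrid-search idea concrete, here is a minimal Python sketch of the filtering step. The `SignCandidate` record, its fields, and the handshape labels are hypothetical illustrations, not the paper's actual data model or recognizer output.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SignCandidate:
    """One candidate returned by the (imperfect) sign recognizer (hypothetical schema)."""
    gloss: str       # dictionary entry, e.g. "HELP"
    score: float     # recognizer confidence, higher is better
    handshape: str   # linguistic property, e.g. "flat-B"
    num_hands: int   # one- or two-handed sign

def hybrid_search(candidates: list[SignCandidate],
                  handshape: Optional[str] = None,
                  num_hands: Optional[int] = None) -> list[SignCandidate]:
    """Start from the video-based ranking, then narrow it with linguistic filters."""
    kept = [c for c in candidates
            if (handshape is None or c.handshape == handshape)
            and (num_hands is None or c.num_hands == num_hands)]
    return sorted(kept, key=lambda c: c.score, reverse=True)

# Example: the recognizer returned many candidates; the learner remembers the
# sign used a flat-B handshape, so the list is narrowed before browsing.
candidates = [
    SignCandidate("HELP", 0.61, "flat-B", 2),
    SignCandidate("BOOK", 0.58, "flat-B", 2),
    SignCandidate("UNDERSTAND", 0.55, "1", 1),
]
print([c.gloss for c in hybrid_search(candidates, handshape="flat-B")])
# ['HELP', 'BOOK']
```

The design point the sketch captures is that the video query supplies the ranking while the linguistic filters only prune it, so a learner who misremembers a property can simply clear the filter and fall back to the full result list.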
Collaborative writing is an integral part of academic and professional work. Although some prior research has focused on accessibility in collaborative writing, we know little about how visually impaired writers work in real time with sighted collaborators or how online editing tools could better support their work. Grounded in formative interviews and observations with eight screen reader users, we built Co11ab, a Google Docs extension that provides configurable audio cues to help users understand who is editing (or has edited) what and where in a shared document. Results from a design exploration with fifteen screen reader users, including three naturalistic sessions of use with sighted colleagues, reveal how screen reader users understand various auditory representations and use them to coordinate real-time collaborative writing. We revisit what collaboration awareness means for screen reader users and discuss design considerations for future systems.
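As a rough illustration of what a configurable edit-to-audio-cue mapping could look like, here is a hypothetical Python sketch: a distinct timbre per collaborator and a pitch that hints at how far the remote edit is from the reader's current position. The event structure, timbre table, and pitch formula are assumptions for illustration, not Co11ab's actual design.

```python
from dataclasses import dataclass

@dataclass
class EditEvent:
    """A remote edit observed in the shared document (hypothetical schema)."""
    author: str      # collaborator who made the edit
    paragraph: int   # where in the document the edit happened

# Hypothetical configuration: each collaborator gets a distinct timbre.
TIMBRE_BY_AUTHOR = {"Alice": "marimba", "Bob": "piano"}

def audio_cue(event: EditEvent, my_paragraph: int) -> dict:
    """Map an edit event to an audio cue plus an optional spoken announcement."""
    distance = abs(event.paragraph - my_paragraph)
    return {
        "timbre": TIMBRE_BY_AUTHOR.get(event.author, "bell"),
        "pitch_hz": max(200, 800 - 100 * distance),  # closer edits sound higher
        "announce": f"{event.author} edited paragraph {event.paragraph}",
    }

print(audio_cue(EditEvent("Alice", 7), my_paragraph=5))
```

Making the mapping a small, swappable function mirrors the abstract's emphasis on configurability: a user could choose which dimensions (who, what, where) are sonified and which are spoken.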
Animated GIF images have become prevalent in internet culture, often used to express richer and more nuanced meanings than static images. But animated GIFs often lack adequate alternative text descriptions, and generating such descriptions automatically is challenging, leaving GIFs inaccessible to blind or low-vision (BLV) users. To improve the accessibility of animated GIFs for BLV users, we present Ga11y (pronounced "galley"), a system for creating GIF annotations. Ga11y combines machine intelligence and crowdsourcing and has three components: an Android client for submitting annotation requests, a backend server and database, and a web interface where volunteers can respond to annotation requests. We evaluated three human annotation interfaces and employed the one that yielded the best annotation quality. We also conducted a multi-stage evaluation with 12 BLV participants from the United States and China, receiving positive feedback.
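To illustrate how an annotation request might flow across the three components (client, backend, volunteer web interface), here is a hypothetical Python sketch of a request record that starts with a machine-generated description and is later refined by a volunteer. The schema, field names, and fallback logic are assumptions, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnnotationRequest:
    """A GIF submitted by the client for annotation (hypothetical schema)."""
    gif_url: str
    machine_description: Optional[str] = None   # automatic first pass
    human_description: Optional[str] = None     # supplied later by a volunteer

    def best_description(self) -> str:
        """Prefer the volunteer's annotation; fall back to the machine one."""
        return self.human_description or self.machine_description or "No description yet"

# Backend flow sketch: queue the request, attach a machine description,
# then let a volunteer on the web interface refine it.
queue: list[AnnotationRequest] = []
req = AnnotationRequest("https://example.com/cat.gif")
queue.append(req)
req.machine_description = "A cat jumping onto a table"
req.human_description = "A gray cat leaps onto a kitchen table and knocks over a cup"
print(req.best_description())
```

The layered fallback reflects the abstract's hybrid design: machine intelligence provides immediate coverage, while crowdsourced annotations improve quality when volunteers respond.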