Recent research proposed eyelid gestures for people with upper-body motor impairments (UMI) to interact with smartphones without finger touch. However, such eyelid gestures were designed by researchers. It remains unknown what eyelid gestures people with UMI would want and be able to perform. Moreover, other above-the-neck body parts (e.g., mouth, head) could be used to form more gestures. We conducted a user study in which 17 people with UMI designed above-the-neck gestures for 26 common commands on smartphones. We collected a total of 442 user-defined gestures involving the eyes, the mouth, and the head. Participants were more likely to make gestures with their eyes and preferred gestures that were simple, easy to remember, and less likely to draw attention from others. We further conducted a survey (N=24) to validate the usability and acceptance of these user-defined gestures. Results show that the user-defined gestures were acceptable to both people with and without motor impairments.
We examine touchscreen stroke-gestures and mid-air motion-gestures articulated by users with upper-body motor impairments with devices worn on the wrist, finger, and head. We analyze users' gesture input performance in terms of production time, articulation consistency, and kinematic measures, and contrast the performance of users with upper-body motor impairments with that of a control group of users without impairments. Our results, from two datasets of 7,290 stroke-gestures and 3,809 motion-gestures collected from 28 participants, reveal that users with upper-body motor impairments take twice as much time to produce stroke-gestures on wearable touchscreens compared to users without impairments, but articulate motion-gestures equally fast and with similar acceleration. We interpret our findings in the context of ability-based design and propose ten implications for accessible gesture input with upper-body wearables for users with upper-body motor impairments.
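To make the kinematic measures above concrete, the following minimal Python sketch (not from the paper; the representation of a gesture as a sequence of timestamped (x, y) samples is our assumption) computes production time and a simple mean-acceleration statistic for a single stroke-gesture:

from math import hypot

def production_time(gesture):
    """Time elapsed between the first and last sample of the gesture (seconds)."""
    return gesture[-1][2] - gesture[0][2]

def speeds(gesture):
    """Instantaneous speed between consecutive (x, y, t) samples (px/s)."""
    result = []
    for (x0, y0, t0), (x1, y1, t1) in zip(gesture, gesture[1:]):
        dt = t1 - t0
        if dt > 0:
            result.append(hypot(x1 - x0, y1 - y0) / dt)
    return result

def mean_acceleration(gesture):
    """Mean absolute change in speed per unit time (px/s^2)."""
    v = speeds(gesture)
    times = [sample[2] for sample in gesture[1:]]  # timestamps aligned with v
    accs = [abs(v1 - v0) / (t1 - t0)
            for v0, v1, t0, t1 in zip(v, v[1:], times, times[1:])
            if t1 > t0]
    return sum(accs) / len(accs) if accs else 0.0

# Example: a short synthetic stroke-gesture of four samples.
stroke = [(0, 0, 0.00), (10, 0, 0.05), (25, 5, 0.10), (45, 15, 0.15)]
print(production_time(stroke), mean_acceleration(stroke))

Comparing such per-gesture statistics across participant groups is one plausible way to operationalize the speed and acceleration contrasts reported in the abstract; the paper's actual measures may be defined differently.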
Individuals with spinal cord injury (SCI) need to perform numerous self-care behaviors, some very frequently. Pressure reliefs (PRs), which prevent life-threatening pressure ulcers (PUs), are one such behavior. We conducted a qualitative study with seven individuals with severe SCI, all of whom depend on power wheelchairs, to explore their current PR behavior and the potential for technology to facilitate PR adherence. While our participants were highly motivated to perform PRs because of prior PUs, we found that their understanding of how and when to perform a PR differed by individual, and that while they sometimes forgot to perform PRs, in other cases contextual factors made it difficult to perform a PR. Our findings provide insight into the complexity of this design space, identify considerations for designing technology to facilitate these behaviors, and demonstrate the opportunity for personal informatics to be more inclusive by supporting the needs of this population.
People with limited mobility often use multiple devices when interacting with computing systems, but little is known about the impact these multi-modal configurations have on daily computing use. A deeper understanding of the practices, preferences, obstacles, and workarounds associated with accessible multi-modal input can uncover opportunities to create more accessible computer applications and hardware. We explored how people with limited mobility use multi-modal input through a three-part investigation grounded in the context of video games. First, we surveyed 43 people to learn about their preferred devices and configurations. Next, we conducted semi-structured interviews with 14 participants to understand their experiences and challenges with using, configuring, and discovering input setups. Lastly, we performed a systematic review of 74 YouTube videos to illustrate and categorize input setups and adaptations in situ. We conclude with a discussion on how our findings can inform future accessibility research for current and emerging computing technologies.