Assistive Technologies

Conference Name
CHI 2024
Designing Upper-Body Gesture Interaction with and for People with Spinal Muscular Atrophy in VR
Abstract

Recent research has proposed gaze-assisted gestures to enhance interaction within virtual reality (VR), providing opportunities for people with motor impairments to experience VR. Compared to people with other motor impairments, those with Spinal Muscular Atrophy (SMA) exhibit greater distal limb mobility, which affords them a larger gesture design space. However, it remains unknown what gaze-assisted upper-body gestures people with SMA would want and be able to perform. We conducted an elicitation study in which 12 VR-experienced people with SMA designed upper-body gestures for 26 VR commands, collecting 312 user-defined gestures. Participants predominantly favored creating gestures with their hands. The type of task and participants' abilities influenced their choice of body parts for gesture design. Participants tended to increase their body involvement and preferred gestures that required minimal physical effort and were aesthetically pleasing. Our research will contribute to creating better gesture-based input methods for people with motor impairments to interact with VR.

Authors
Jingze Tian
Southeast University, Nanjing, China
Yingna Wang
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Keye Yu
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Liyi Xu
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Junan Xie
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Franklin Mingzhe Li
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Yafeng Niu
Southeast University, Nanjing, Jiangsu, China
Mingming Fan
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Paper URL

doi.org/10.1145/3613904.3642884

Video
Beyond Repairing with Electronic Speech: Towards Embodied Communication and Assistive Technology
Abstract

Traditionally, Western philosophies have strongly favoured a dualist interpretation of consciousness, emphasising the importance of the 'mind' over the 'body'. However, we argue that adopted assistive technologies become embodied and extend intentionality within environments. In this paper, we restore an embodied view of the mind to theoretically enhance understandings of assistive technology and human-human communication. Initially, we explore literature on phenomenological theories of human experience, post-phenomenological accounts of technology, embodied accounts of assistive technology, and participatory design. We then present a case study demonstrating the generative and disruptive effects of the embodied framework for co-designing AAC with people living with aphasia. Our findings show that the embodied framework supports a more multidimensional account of experience and suggests a shift away from AAC devices that seek to 'repair' users' speech. Reflecting on our case study, we then outline concerns with nascent technologies that could disembody and limit accessibility.

Authors
Humphrey Curtis
King's College London, London, United Kingdom
Timothy Neate
King's College London, London, United Kingdom
Paper URL

doi.org/10.1145/3613904.3642274

Video
People with Disabilities Redefining Identity through Robotic and Virtual Avatars: A Case Study in Avatar Robot Cafe
Abstract

Robotic avatars and telepresence technology enable people with disabilities to engage in physical work. Despite the recent popularity of the metaverse, few studies have explored the use of virtual avatars and environments by people with disabilities. In this study, seven disabled participants working in a cafe that provides remote customer service via robotic avatars were engaged in the development and use of personalized virtual avatars displayed on a large screen in situ, in combination with existing physical robots, creating a hybrid cyber-physical space. We conducted longitudinal semi-structured interviews to investigate the psychological changes experienced by the participants. The results revealed that mass-produced robotic avatars allowed participants not to disclose their disability if they did not want to, but also backgrounded their identities; by contrast, customized virtual avatars, shaped without physical constraints, highlighted their personalities. The robotic and virtual avatars complemented each other, and their combined use can support pilots (the remote operators) in redefining their identity.

Authors
Yuji Hatada
The University of Tokyo, Tokyo, Japan
Giulia Barbareschi
Keio University, Yokohama, Japan
Kazuaki Takeuchi
Ory Laboratory, Tokyo, Japan
Hiroaki Kato
Ory Laboratory, Tokyo, Japan
Kentaro Yoshifuji
Ory Laboratory Inc., Minato Ward, Tokyo, Japan
Kouta Minamizawa
Keio University Graduate School of Media Design, Yokohama, Japan
Takuji Narumi
The University of Tokyo, Tokyo, Japan
Paper URL

doi.org/10.1145/3613904.3642189

Video
“Can It Be Customized According to My Motor Abilities?”: Toward Designing User-Defined Head Gestures for People with Dystonia
Abstract

Recent studies have proposed above-the-neck gestures that allow people with upper-body motor impairments to interact with mobile devices without finger touch, resulting in an appropriate user-defined gesture set. However, many of these gestures involve sustaining the eyelids in a closed or open state for a period of time. This is challenging for people with dystonia, who have difficulty sustaining and intermitting muscle contractions. Meanwhile, other facial parts, such as the tongue and nose, can also be used to reduce the sustained use of the eyes in the interaction. Consequently, we conducted a user study inviting 16 individuals with dystonia to design gestures based on facial muscle movements for 26 common smartphone commands. We collected 416 user-defined head gestures involving facial features and the shoulders. Finally, we obtained a preferred gesture set for individuals with dystonia. Participants preferred to make gestures with their heads and favored unnoticeable gestures. Our findings provide valuable references for the universal design of natural interaction technology.

Authors
Qin Sun
Institute of Psychology, Beijing, China
Yunqi Hu
Institute of Psychology, Beijing, China
Mingming Fan
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Jingting Li
CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
Su-Jing Wang
Institute of Psychology, Beijing, China
Paper URL

doi.org/10.1145/3613904.3642378

Video
Barriers to Photosensitive Accessibility in Virtual Reality
Abstract

Virtual reality (VR) systems have grown in popularity as an immersive modality for daily activities such as gaming, socializing, and working. However, this technology is not always accessible for people with photosensitive epilepsy (PSE) who may experience seizures or other adverse symptoms when exposed to certain light stimuli (e.g., flashes or strobes). How can VR be made more inclusive and safer for people with PSE? In this paper, we report on a series of semi-structured interviews about current perceptions of accessibility in VR among people with PSE. We identify 12 barriers to accessibility that fall into four categories: physical VR equipment, VR interfaces and content, specific VR applications, and individual differences in sensitivity. Our findings allow researchers and practitioners to better understand the meaning of photosensitive accessibility in the context of VR, and provide a step towards enabling people with PSE to enjoy the benefits offered by immersive technology.

Award
Honorable Mention
Authors
Laura South
Northeastern University, Boston, Massachusetts, United States
Caglar Yildirim
Northeastern University, Boston, Massachusetts, United States
Amy Pavel
University of Texas at Austin, Austin, Texas, United States
Michelle A. Borkin
Northeastern University, Boston, Massachusetts, United States
Paper URL

doi.org/10.1145/3613904.3642635

Video