99. Automation and Gesture-Based Interaction

“I am both here and there” Parallel Control of Multiple Robotic Avatars by Disabled Workers in a Cafe
Description

Robotic avatars can help disabled people extend their reach in interacting with the world. Technological advances make it possible for individuals to embody multiple avatars simultaneously. However, existing studies have been limited to laboratory conditions and did not involve disabled participants. In this paper, we present a real-world implementation of a parallel control system allowing disabled workers in a café to embody multiple robotic avatars at the same time to carry out different tasks. Our data corpus comprises semi-structured interviews with workers, customer surveys, and videos of café operations. Results indicate that the system increases workers' agency, enabling them to better manage customer journeys. Parallel embodiment and transitions between avatars create multiple interaction loops where the links between disabled workers and customers remain consistent, but the intermediary avatar changes. Based on our observations, we theorize that disabled individuals possess specific competencies that increase their ability to manage multiple avatar bodies.
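
A minimal sketch, in Python, of the parallel-embodiment idea the abstract describes: one worker holds control sessions to several avatars at once, foregrounding one while the others continue their assigned tasks. All class, avatar, and task names here are hypothetical; the deployed system's architecture is not specified in the abstract.

```python
class Avatar:
    def __init__(self, name: str):
        self.name = name
        self.task = "idle"

class ParallelController:
    """One operator multiplexed across several avatar bodies."""
    def __init__(self, avatars: list):
        self.avatars = {a.name: a for a in avatars}
        self.active = None  # which avatar the worker is currently foregrounding

    def switch_to(self, name: str) -> None:
        # Transition the worker's attention; background avatars keep their tasks.
        self.active = name

    def command(self, task: str) -> None:
        if self.active:
            self.avatars[self.active].task = task

ctrl = ParallelController([Avatar("counter_bot"), Avatar("table_bot")])
ctrl.switch_to("counter_bot")
ctrl.command("take order")
ctrl.switch_to("table_bot")   # counter_bot continues taking the order
ctrl.command("deliver coffee")
print({a.name: a.task for a in ctrl.avatars.values()})
```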

Understanding Wheelchair Users' Preferences for On-Body, In-Air, and On-Wheelchair Gestures
Description

We present empirical results from a gesture elicitation study conducted with eleven wheelchair users who proposed on-body, in-air, and on-wheelchair gestures to effect twenty-one referents representing common actions, types of digital content, and navigation commands for interactive systems. We report a large preference for on-body (47.6%) and in-air (40.7%) gestures compared to on-wheelchair (11.7%) gestures, mostly represented by touch input on different parts of the body and hand poses performed in mid-air with one hand. Following an agreement analysis that revealed low consensus (<5.5%) between users, despite high perceived gesture ease, goodness, and social acceptability within users, we examine our participants' gesture characteristics in relation to their self-reported motor impairments, e.g., low strength and rapid fatigue. We highlight the need for personalized gesture sets, tailored to and reflective of both users' preferences and specific motor abilities, an implication that we examine through the lens of ability-based design.
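
The low-consensus figure comes from an agreement analysis. The standard agreement rate formula used in gesture elicitation work (Vatavu and Wobbrock, 2015) can be computed as below; the proposal labels are hypothetical, and this is a sketch of the metric, not the authors' analysis code.

```python
from collections import Counter

def agreement_rate(proposals):
    """Vatavu & Wobbrock (2015) agreement rate AR(r) for one referent.

    proposals: one gesture label per participant, assumed to have been
    grouped by similarity beforehand.
    """
    n = len(proposals)
    if n < 2:
        return 1.0
    groups = Counter(proposals)  # partition proposals into identical groups
    sum_sq = sum((size / n) ** 2 for size in groups.values())
    return (n / (n - 1)) * sum_sq - 1 / (n - 1)

# Example: 11 participants, as in the study (labels are made up)
labels = ["tap-thigh"] * 3 + ["pinch-air"] * 2 + ["swipe-armrest"] * 2 \
         + ["fist", "wave", "tap-palm", "nod"]
print(agreement_rate(labels))  # ~0.09, i.e., low consensus
```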

Supporting Novices Author Audio Descriptions via Automatic Feedback
Description

Audio descriptions (AD) make videos accessible to those who cannot see them. However, many videos lack AD and remain inaccessible because traditional approaches rely on expensive professional production. We aim to lower production costs by involving novices in the process. We present an AD authoring system that supports novices in writing scene descriptions (SD), textual descriptions of video scenes, and converting them into AD via text-to-speech. The system combines video scene recognition and natural language processing to review novice-written SD and automatically feed back what to mention. To assess the effectiveness of this automatic feedback in supporting novices, we recruited 60 participants to author SD with no feedback, human feedback, and automatic feedback. Our study shows that automatic feedback improves SD's descriptiveness, objectiveness, and learning quality, without affecting qualities like sufficiency and clarity. Though human feedback remains more effective, automatic feedback can reduce production costs by 45%.
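
To illustrate the kind of check such automatic feedback can perform, here is a minimal sketch: compare the entities a scene-recognition model reports against the words the novice's SD already uses, and surface the gaps as "what to mention". The entity set and SD below are hypothetical, and the paper's actual pipeline (scene recognition plus NLP review) is certainly richer than this string match.

```python
import re

def missing_mentions(detected: set, sd_text: str) -> set:
    """Return detected scene entities the novice's SD does not yet mention."""
    words = set(re.findall(r"[a-z']+", sd_text.lower()))
    return {e for e in detected if e.lower() not in words}

# Hypothetical output of a scene-recognition model for one shot:
detected = {"woman", "dog", "park", "bench"}
sd = "A woman walks through a park."
print(missing_mentions(detected, sd))  # -> {'dog', 'bench'} (order may vary)
```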

Autonomous is Not Enough: Designing Multisensory Mid-Air Gestures for Vehicle Interactions Among People with Visual Impairments
Description

Should fully autonomous vehicles (FAVs) be designed inclusively and accessibly, independence will be transformed for millions of people worldwide who experience transportation-limiting disabilities. Although FAVs hold promise for efficient transportation without driver intervention, a truly accessible experience must enable user input, for all people, in many driving scenarios (e.g., to alter a route or pull over during an emergency). Therefore, this paper explores desires for control in FAVs among people who are blind or visually impaired (n=23). Results indicate strong support for control across a battery of driving tasks, as well as the need for multimodal information. These findings inspired the design and evaluation of a novel multisensory interface leveraging mid-air gestures, audio, and haptics. All participants successfully navigated driving scenarios using our gestural-audio interface, reporting high ease of use. Contributions include the first inclusively designed gesture set for FAV control and insight regarding supplemental haptic and audio cues.
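
As a sketch of how such a gestural-audio interface could route input, the snippet below maps recognized mid-air gestures to vehicle commands, each paired with a spoken confirmation so the whole loop stays non-visual. The gesture names, commands, and speak callback are assumptions for illustration, not the authors' gesture set.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Command:
    action: Callable[[], None]
    audio_cue: str  # spoken confirmation so BLV riders know the command registered

def pull_over(): print("[vehicle] pulling over")
def change_route(): print("[vehicle] rerouting")

GESTURES = {
    "palm_push": Command(pull_over, "Pulling over at the next safe spot."),
    "swipe_right": Command(change_route, "Route changed."),
}

def on_gesture(name: str, speak: Callable[[str], None]) -> None:
    cmd = GESTURES.get(name)
    if cmd is None:
        speak("Gesture not recognized.")  # audio fallback instead of silent failure
        return
    cmd.action()
    speak(cmd.audio_cue)

on_gesture("palm_push", speak=print)
```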

ImageAssist: Tools for Enhancing Touchscreen-Based Image Exploration Systems for Blind and Low Vision Users
Description

Blind and low vision (BLV) users often rely on alt text to understand what a digital image is showing. However, recent research has investigated how touch-based image exploration on touchscreens can supplement alt text. Touchscreen-based image exploration systems let BLV users understand images in depth while granting a strong sense of agency. Yet, prior work has found that these systems require substantial effort to use, and little work has explored these systems' bottlenecks in depth or proposed solutions. To address this, we present ImageAssist, a set of three tools that assist BLV users through the process of exploring images by touch, scaffolding the exploration process. We perform a series of studies with BLV users to design and evaluate ImageAssist, and our findings reveal several implications for image exploration tools for BLV users.
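
The underlying interaction these systems build on can be sketched as a hit test from touch coordinates to region descriptions, with speech output layered on top; ImageAssist's tools scaffold this loop rather than replace it. The regions and coordinates below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Region:
    x: int
    y: int
    w: int
    h: int
    description: str  # what a screen reader would speak for this region

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

REGIONS = [
    Region(0, 0, 200, 150, "sky with clouds"),
    Region(0, 150, 200, 100, "a sandy beach"),
]

def on_touch(px: int, py: int) -> str:
    # Map the touch point to the first region that contains it.
    for r in REGIONS:
        if r.contains(px, py):
            return r.description
    return "background"

print(on_touch(50, 180))  # -> "a sandy beach"
```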

Assistive-Technology Aided Manual Accessibility Testing in Mobile Apps, Powered by Record-and-Replay
Description

Billions of people use smartphones daily, including the 15% of the world's population who have disabilities. Mobile platforms encourage developers to manually assess their apps' accessibility in the way disabled users interact with phones, i.e., through Assistive Technologies (AT) like screen readers. However, most developers only test their apps with touch gestures and lack the knowledge to use AT properly. Moreover, automated accessibility testing tools typically do not consider AT. This paper introduces a record-and-replay technique that records the developer's touch interactions, replays the same actions with an AT, and generates a visualized report of the various ways of interacting with the app using ATs. An empirical evaluation of this technique on real-world apps revealed that while user studies are the most reliable way of assessing accessibility, our technique can aid developers in detecting complex accessibility issues at different stages of development.
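
A toy sketch of the record-and-replay translation step: logged touch targets are replayed as AT focus-and-activate actions, and targets the AT cannot reach become report entries. The event format and the at_focus_and_activate stub are hypothetical stand-ins; the actual technique drives real assistive technologies such as a screen reader on device.

```python
# Touch interactions logged during the developer's normal test pass
# (hypothetical event format).
recorded = [
    {"type": "tap", "target": "btn_submit"},
    {"type": "tap", "target": "img_logo"},
]

def at_focus_and_activate(target: str) -> bool:
    # Stand-in for driving a screen reader to move accessibility focus
    # to `target` and activate it; here we pretend the logo is unreachable.
    return target != "img_logo"

# Replay each touch as an AT action and collect the accessibility report.
report = []
for event in recorded:
    ok = at_focus_and_activate(event["target"])
    report.append((event["target"], "reachable via AT" if ok else "NOT reachable via AT"))

for target, status in report:
    print(f"{target}: {status}")
```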
