Automation and Gesture-Based Interaction

Conference
CHI 2023
“I am both here and there” Parallel Control of Multiple Robotic Avatars by Disabled Workers in a Café
Abstract

Robotic avatars can help disabled people extend their reach in interacting with the world. Technological advances make it possible for individuals to embody multiple avatars simultaneously. However, existing studies have been limited to laboratory conditions and did not involve disabled participants. In this paper, we present a real-world implementation of a parallel control system allowing disabled workers in a café to embody multiple robotic avatars at the same time to carry out different tasks. Our data corpus comprises semi-structured interviews with workers, customer surveys, and videos of café operations. Results indicate that the system increases workers' agency, enabling them to better manage customer journeys. Parallel embodiment and transitions between avatars create multiple interaction loops where the links between disabled workers and customers remain consistent, but the intermediary avatar changes. Based on our observations, we theorize that disabled individuals possess specific competencies that increase their ability to manage multiple avatar bodies.
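This is a field study rather than a systems paper, but the core mechanism is easy to picture: the pilot keeps sessions to several avatars open at once and moves live control between them. The sketch below only illustrates that idea; the ParallelPilot class and its methods are hypothetical, not the authors' implementation (OriHime and OriHime-D are Ory Laboratory's avatar robots used in the café).

```python
from dataclasses import dataclass


@dataclass
class Avatar:
    """One robotic avatar a pilot can embody (names here are illustrative)."""
    name: str
    task: str

    def send(self, command: str) -> None:
        print(f"[{self.name}] executing: {command}")


class ParallelPilot:
    """Keeps sessions to several avatars open at once; exactly one is in the
    'foreground' (actively driven) while the others idle, mirroring the idea
    of parallel embodiment with transitions between bodies."""

    def __init__(self, avatars: list[Avatar]):
        self.avatars = {a.name: a for a in avatars}  # all sessions stay open
        self.foreground = avatars[0].name

    def switch_to(self, name: str) -> None:
        print(f"-- transition: {self.foreground} -> {name}")
        self.foreground = name

    def drive(self, command: str) -> None:
        self.avatars[self.foreground].send(command)


# Example: one worker embodying a table-side avatar and a mobile serving avatar.
pilot = ParallelPilot([Avatar("OriHime", "table-side conversation"),
                       Avatar("OriHime-D", "carrying drinks")])
pilot.drive("greet customer")
pilot.switch_to("OriHime-D")
pilot.drive("deliver order to table 3")
```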

Authors
Giulia Barbareschi
Keio University, Yokohama, Japan
Midori Kawaguchi
Keio University Graduate School of Media Design, Yokohama, Japan
Hiroaki Kato
Ory Laboratory, Tokyo, Japan
Masato Nagahiro
OriHime Pilots, Tokyo, Japan
Kazuaki Takeuchi
Ory Laboratory, Tokyo, Japan
Yoshifumi Shiiba
Ory Laboratory, Tokyo, Japan
Shunichi Kasahara
Sony Computer Science Laboratories, Inc., Tokyo, Japan
Kai Kunze
Keio University, Tokyo, Japan
Kouta Minamizawa
Keio University Graduate School of Media Design, Yokohama, Japan
Paper URL

https://doi.org/10.1145/3544548.3581124

Understanding Wheelchair Users' Preferences for On-Body, In-Air, and On-Wheelchair Gestures
Abstract

We present empirical results from a gesture elicitation study conducted with eleven wheelchair users that proposed on-body, in-air, and on-wheelchair gestures to effect twenty-one referents representing common actions, types of digital content, and navigation commands for interactive systems. We report a large preference for on-body (47.6%) and in-air (40.7%) compared to on-wheelchair (11.7%) gestures, mostly represented by touch input on different parts of the body and hand poses performed in mid-air with one hand. Following an agreement analysis that revealed low consensus (<5.5%) between users, although high perceived gesture ease, goodness, and social acceptability within users, we examine our participants' gesture characteristics in relation to their self-reported motor impairments, e.g., low strength, rapid fatigue, etc. We highlight the need for personalized gesture sets, tailored to and reflective of both users' preferences and specific motor abilities, an implication that we examine through the lenses of ability-based design.
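The agreement analysis mentioned above is conventionally computed with the agreement rate AR(r) of Vatavu and Wobbrock (CHI 2015). As a minimal sketch, assuming made-up proposals from eleven participants for a single referent, the rate can be computed as:

```python
from collections import Counter


def agreement_rate(proposals: list[str]) -> float:
    """Agreement rate AR(r) for one referent (Vatavu & Wobbrock, CHI 2015):
    AR(r) = |P|/(|P|-1) * sum((|Pi|/|P|)^2) - 1/(|P|-1),
    where P holds all proposals and the Pi group identical ones."""
    p = len(proposals)
    if p < 2:
        return 1.0
    squares = sum((n / p) ** 2 for n in Counter(proposals).values())
    return (p / (p - 1)) * squares - 1 / (p - 1)


# Made-up proposals from eleven participants for one referent, e.g. "next item":
proposals = ["swipe left", "tap shoulder", "swipe left", "point", "tap thigh",
             "fist", "raise hand", "tap armrest", "pinch", "wave", "nod"]
print(f"AR = {agreement_rate(proposals):.3f}")  # ~0.018, i.e. low consensus
```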

Award
Honorable Mention
Authors
Laura-Bianca Bilius
Ștefan cel Mare University of Suceava, Suceava, Romania
Ovidiu-Ciprian Ungurean
Ștefan cel Mare University of Suceava, Suceava, Romania
Radu-Daniel Vatavu
Ștefan cel Mare University of Suceava, Suceava, Romania
Paper URL

https://doi.org/10.1145/3544548.3580929

Supporting Novices Author Audio Descriptions via Automatic Feedback
Abstract

Audio descriptions (AD) make videos accessible to those who cannot see them. But many videos lack AD and remain inaccessible as traditional approaches involve expensive professional production. We aim to lower production costs by involving novices in this process. We present an AD authoring system that supports novices to write scene descriptions (SD)—textual descriptions of video scenes—and convert them into AD via text-to-speech. The system combines video scene recognition and natural language processing to review novice-written SD and feeds back what to mention automatically. To assess the effectiveness of this automatic feedback in supporting novices, we recruited 60 participants to author SD with no feedback, human feedback, and automatic feedback. Our study shows that automatic feedback improves SD's descriptiveness, objectiveness, and learning quality, without affecting qualities like sufficiency and clarity. Though human feedback remains more effective, automatic feedback can reduce production costs by 45%.
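The abstract does not specify the feedback pipeline, but the underlying idea (compare what a vision model recognizes in the scene with what the novice actually wrote, and point out omissions) can be sketched as follows. Everything here, including the set-based "detector" and plain word matching, is a deliberately simplified stand-in rather than the paper's system:

```python
def automatic_feedback(scene_objects: set[str], description: str) -> list[str]:
    """Suggest salient objects that the scene description fails to mention.
    A real system would use a vision model for recognition and an NLP parser
    for matching; both are reduced to plain sets here for illustration."""
    mentioned = set(description.lower().split())
    missing = sorted(obj for obj in scene_objects if obj not in mentioned)
    return [f"Consider mentioning the {obj}." for obj in missing]


# Hypothetical output of a scene recognizer for one video frame:
objects = {"woman", "dog", "park", "bicycle"}
draft = "A woman walks through a park"
for tip in automatic_feedback(objects, draft):
    print(tip)  # -> Consider mentioning the bicycle. / ... the dog.
```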

Authors
Rosiana Natalie
Singapore Management University, Singapore, Singapore
Joshua Tseng
Singapore Management University, Singapore, Singapore
Hernisa Kacorri
University of Maryland, College Park, Maryland, United States
Kotaro Hara
Singapore Management University, Singapore, Singapore
Paper URL

https://doi.org/10.1145/3544548.3581023

Autonomous is Not Enough: Designing Multisensory Mid-Air Gestures for Vehicle Interactions Among People with Visual Impairments
Abstract

Should fully autonomous vehicles (FAVs) be designed inclusively and accessibly, independence will be transformed for millions of people experiencing transportation-limiting disabilities worldwide. Although FAVs hold promise to improve efficient transportation without intervention, a truly accessible experience must enable user input, for all people, in many driving scenarios (e.g., to alter a route or pull over during an emergency). Therefore, this paper explores desires for control in FAVs among (n=23) people who are blind and visually impaired. Results indicate strong support for control across a battery of driving tasks, as well as the need for multimodal information. These findings inspired the design and evaluation of a novel multisensory interface leveraging mid-air gestures, audio, and haptics. All participants successfully navigated driving scenarios using our gestural-audio interface, reporting high ease-of-use. Contributions include the first inclusively designed gesture set for FAV control and insight regarding supplemental haptic and audio cues.
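As a rough illustration of how such a multisensory interface might dispatch input, the sketch below maps recognized mid-air gestures to driving commands and pairs each with audio and haptic confirmation. The gesture names, commands, and feedback patterns are hypothetical; the paper's actual gesture set was co-designed with blind and visually impaired participants.

```python
from dataclasses import dataclass


@dataclass
class Feedback:
    audio: str   # spoken confirmation
    haptic: str  # name of a vibration pattern


# Hypothetical mapping of mid-air gestures to FAV commands with paired cues:
GESTURES = {
    "palm_push_forward": ("pull_over", Feedback("Pulling over.", "long_buzz")),
    "swipe_left":        ("reroute",   Feedback("Rerouting.", "double_tap")),
    "raised_fist":       ("stop_now",  Feedback("Emergency stop.", "triple_buzz")),
}


def handle_gesture(gesture: str) -> None:
    """Dispatch a recognized gesture to a driving command and confirm it
    through both audio and haptics, so nothing depends on vision."""
    if gesture not in GESTURES:
        print("audio: Gesture not recognized. Please try again.")
        return
    command, fb = GESTURES[gesture]
    print(f"vehicle command: {command}")
    print(f"audio: {fb.audio} | haptic: {fb.haptic}")


handle_gesture("palm_push_forward")
```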

Authors
Paul D. S. Fink
The University of Maine, Orono, Maine, United States
Velin Dimitrov
Toyota Research Institute, Cambridge, Massachusetts, United States
Hiroshi Yasuda
Toyota Research Institute, Los Altos, California, United States
Tiffany L. Chen
Toyota Research Institute, Los Altos, California, United States
Richard R. Corey
University of Maine, Orono, Maine, United States
Nicholas A. Giudice
University of Maine, Orono, Maine, United States
Emily Sarah Sumner
Toyota Research Institute, Los Altos, California, United States
Paper URL

https://doi.org/10.1145/3544548.3580762

ImageAssist: Tools for Enhancing Touchscreen-Based Image Exploration Systems for Blind and Low Vision Users
Abstract

Blind and low vision (BLV) users often rely on alt text to understand what a digital image is showing. However, recent research has investigated how touch-based image exploration on touchscreens can supplement alt text. Touchscreen-based image exploration systems allow BLV users to deeply understand images while granting a strong sense of agency. Yet, prior work has found that these systems require a lot of effort to use, and little work has been done to explore these systems' bottlenecks on a deeper level and propose solutions to these issues. To address this, we present ImageAssist, a set of three tools that assist BLV users through the process of exploring images by touch — scaffolding the exploration process. We perform a series of studies with BLV users to design and evaluate ImageAssist, and our findings reveal several implications for image exploration tools for BLV users.
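The abstract does not detail ImageAssist's internals, but the basic loop of touchscreen image exploration (map a finger position to a labeled image region and voice its description) can be sketched as below. The region data and the smallest-region heuristic are illustrative assumptions, not the authors' design:

```python
# Hypothetical labeled regions of one image: (left, top, right, bottom, label).
REGIONS = [
    (0,   0,   200, 150, "sky with two clouds"),
    (0,   150, 200, 300, "grassy field"),
    (80,  120, 140, 260, "large oak tree"),
]


def describe_touch(x: int, y: int) -> str:
    """Core loop of touch-based image exploration: find the labeled regions
    under the finger and return the text of the smallest (most specific) one,
    which a real system would hand to a screen reader to speak."""
    hits = [(l, t, r, b, label) for (l, t, r, b, label) in REGIONS
            if l <= x <= r and t <= y <= b]
    if not hits:
        return "empty area"
    l, t, r, b, label = min(hits, key=lambda h: (h[2] - h[0]) * (h[3] - h[1]))
    return label


print(describe_touch(100, 200))  # -> "large oak tree"
print(describe_touch(10, 50))    # -> "sky with two clouds"
```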

Authors
Vishnu Nair
Columbia University, New York, New York, United States
Hanxiu 'Hazel' Zhu
Columbia University, New York, New York, United States
Brian A. Smith
Columbia University, New York, New York, United States
Paper URL

https://doi.org/10.1145/3544548.3581302

Assistive-Technology Aided Manual Accessibility Testing in Mobile Apps, Powered by Record-and-Replay
Abstract

Billions of people use smartphones on a daily basis, including 15% of the world's population with disabilities. Mobile platforms encourage developers to manually assess their apps’ accessibility in the way disabled users interact with phones, i.e., through Assistive Technologies (AT) like screen readers. However, most developers only test their apps with touch gestures and do not have enough knowledge to use AT properly. Moreover, automated accessibility testing tools typically do not consider AT. This paper introduces a record-and-replay technique that records the developers' touch interactions, replays the same actions with an AT, and generates a visualized report of various ways of interacting with the app using ATs. Empirical evaluation of this technique on real-world apps revealed that while a user study is the most reliable way of assessing accessibility, our technique can aid developers in detecting complex accessibility issues at different stages of development.
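A conceptual sketch of the record-and-replay idea follows, under assumed data structures (a real tool would instrument the app and drive an actual screen reader such as TalkBack): record the developer's touch actions as element/action pairs, then replay each step the way a screen-reader user would reach it, reporting elements that never receive accessibility focus:

```python
from dataclasses import dataclass


@dataclass
class TouchEvent:
    element: str  # id of the UI element the developer touched
    action: str   # e.g. "tap", "long_press"


def replay_with_screen_reader(recording: list[TouchEvent],
                              focus_order: list[str]) -> list[str]:
    """Replay recorded touch actions the way a screen-reader user would:
    move linear accessibility focus until the target element is reached,
    then perform the action. Returns a report of unreachable elements."""
    issues = []
    for event in recording:
        if event.element not in focus_order:
            # The element never receives accessibility focus, so a
            # screen-reader user cannot perform this step at all.
            issues.append(f"{event.element}: unreachable via screen reader")
            continue
        swipes = focus_order.index(event.element) + 1
        print(f"{swipes} focus move(s) -> {event.action} on {event.element}")
    return issues


# Hypothetical recording and accessibility focus order for a login screen.
recording = [TouchEvent("username", "tap"), TouchEvent("submit_btn", "tap")]
focus_order = ["logo", "username", "password"]  # submit_btn lacks focus
print(replay_with_screen_reader(recording, focus_order))
```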

Authors
Navid Salehnamadi
University of California, Irvine, Irvine, California, United States
Ziyao He
University of California, Irvine, Irvine, California, United States
Sam Malek
University of California Irvine, Irvine, California, United States
Paper URL

https://doi.org/10.1145/3544548.3580679
