Sound Interaction

Conference Name
CHI 2024
Show, Not Tell: A Human-AI Collaborative Approach for Designing Sound Awareness Systems
Abstract

Current sound recognition systems for deaf and hard of hearing (DHH) people identify sound sources or discrete events. However, these systems do not distinguish similar-sounding events (e.g., a patient monitor beep vs. a microwave beep). In this paper, we introduce HACS, a novel futuristic approach to designing human-AI sound awareness systems. HACS assigns AI models to identify sounds based on their characteristics (e.g., a beep) and prompts DHH users to use this information and their contextual knowledge (e.g., “I am in a kitchen”) to recognize sound events (e.g., a microwave). As a first step toward implementing HACS, we articulated a sound taxonomy that classifies sounds based on their characteristics, using insights from a multi-phased research process with people of mixed hearing abilities. We then performed a qualitative (with 9 DHH people) and a quantitative (with a sound recognition model) evaluation. Findings demonstrate the initial promise of HACS for designing accurate and reliable human-AI systems.
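
The abstract describes a split of labor: an AI model reports a coarse sound characteristic, and the DHH user combines it with contextual knowledge to name the concrete event. The following is a minimal sketch of that pipeline; the characteristic labels, context strings, and candidate-event table are hypothetical illustrations, not taken from the paper.

```python
# Sketch of the HACS-style division of labor described above: the AI
# reports a sound *characteristic* (e.g., "beep"), and the user's stated
# context narrows it to a concrete sound *event*. All labels and the
# lookup table below are hypothetical, not from the paper.

# Candidate events for each (characteristic, context) pair.
CANDIDATE_EVENTS = {
    ("beep", "kitchen"): ["microwave", "oven timer"],
    ("beep", "hospital room"): ["patient monitor", "IV pump"],
    ("hiss", "kitchen"): ["gas stove", "pressure cooker"],
}

def identify_characteristic(audio_clip: bytes) -> str:
    """Stand-in for the AI model: returns a characteristic label."""
    # A real system would run a classifier here; we return a fixed
    # label so the sketch stays self-contained.
    return "beep"

def suggest_events(audio_clip: bytes, user_context: str) -> list[str]:
    """Combine the model's characteristic with the user's context."""
    characteristic = identify_characteristic(audio_clip)
    return CANDIDATE_EVENTS.get((characteristic, user_context),
                                [f"unknown {characteristic}"])

if __name__ == "__main__":
    # "I am in a kitchen" + a detected beep -> microwave / oven timer.
    print(suggest_events(b"", "kitchen"))
```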

Authors
Jeremy Zhengqi Huang
University of Michigan, Ann Arbor, Michigan, United States
Reyna Wood
University of Michigan, Ann Arbor, Michigan, United States
Hriday Chhabria
University of Michigan, Ann Arbor, Michigan, United States
Dhruv Jain
University of Michigan, Ann Arbor, Michigan, United States
Paper URL

https://doi.org/10.1145/3613904.3642062

Interactive Shape Sonification for Tumor Localization in Breast Cancer Surgery
Abstract

About 20 percent of patients undergoing breast-conserving surgery require reoperation due to cancerous tissue remaining inside the breast. Breast cancer localization systems utilize auditory feedback to convey the distance between a localization probe and a small marker (seed) implanted into the breast tumor prior to surgery. However, no information on the location of the tumor margin is provided. To reduce the reoperation rate by improving the usability and accuracy of the surgical task, we developed an auditory display using shape sonification to assist with tumor margin localization. Accuracy and usability of the interactive shape sonification were determined on models of the female breast in three user studies with both breast surgeons and non-clinical participants. The comparative studies showed a significant increase in usability (p<0.05) and localization accuracy (p<0.001) of the shape sonification over the auditory feedback currently used in surgery.
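
The abstract does not spell out the sonification mapping, but the core task of conveying probe-to-margin distance can be illustrated with a minimal sketch, assuming a circular margin around the implanted seed and a simple signed-distance-to-pitch mapping. Both assumptions are ours for illustration, not the paper's actual design.

```python
import math

# Sketch of distance-to-margin sonification, assuming a circular tumor
# margin and an arbitrary distance-to-pitch mapping. Illustrative only;
# the paper's shape sonification design may differ.

TUMOR_CENTER = (0.0, 0.0)   # seed position (cm)
TUMOR_RADIUS = 1.5          # assumed margin radius (cm)

def signed_margin_distance(probe_xy: tuple[float, float]) -> float:
    """Negative inside the margin, positive outside (cm)."""
    dx = probe_xy[0] - TUMOR_CENTER[0]
    dy = probe_xy[1] - TUMOR_CENTER[1]
    return math.hypot(dx, dy) - TUMOR_RADIUS

def tone_frequency(distance_cm: float,
                   f_margin: float = 880.0,
                   octaves_per_cm: float = 0.5) -> float:
    """Map signed distance to pitch: f_margin exactly on the margin,
    higher inside, lower outside (an arbitrary illustrative choice)."""
    return f_margin * 2.0 ** (-octaves_per_cm * distance_cm)

if __name__ == "__main__":
    # Sweep the probe toward the seed and print the tone it would hear.
    for probe in [(3.0, 0.0), (1.5, 0.0), (0.5, 0.0)]:
        d = signed_margin_distance(probe)
        print(f"probe={probe} d={d:+.2f} cm -> {tone_frequency(d):.0f} Hz")
```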

Authors
Laura Schütz
Technical University of Munich, Munich, Germany
Trishia El Chemaly
Stanford University, Stanford, California, United States
Emmanuelle Weber
Stanford University, Stanford, California, United States
Anh Thien Doan
Stanford University, Stanford, California, United States
Jacqueline Tsai
Stanford University, Stanford, California, United States
Christoph Leuze
Stanford University, Stanford, California, United States
Bruce Daniel
Stanford University, Stanford, California, United States
Nassir Navab
Technische Universität München, Garching bei München, Germany
Paper URL

https://doi.org/10.1145/3613904.3642257

Using Low-frequency Sound to Create Non-contact Sensations On and In the Body
Abstract

This paper proposes a method for generating non-contact sensations using low-frequency sound waves, without requiring user instrumentation. The method leverages the fundamental acoustic response of a confined space to produce predictable spatial pressure distributions at low frequencies, called modes. These modes can be used to produce sensations throughout the body, in localized areas of the body, or within the body. We first validate the location and strength of the modes through acoustic simulation. Next, a perceptual study shows how different frequencies produce qualitatively different sensations across and within participants' bodies. Low-frequency sound thus offers a new way of delivering non-contact sensations throughout the body, and the results indicate high accuracy in predicting sensations at specific body locations.
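
The "modes" the abstract refers to are the standard standing-wave solutions of a rigid-walled rectangular enclosure, with eigenfrequencies f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2) and pressure distributions given by products of cosines. The sketch below computes both; the room dimensions are illustrative assumptions, and the paper's actual enclosure and simulation setup may differ.

```python
import math
from itertools import product

# Sketch of the standard rectangular-enclosure acoustics behind the
# "modes" described above: standing waves whose frequencies and pressure
# distributions are predictable from the enclosure dimensions. The room
# size below is an illustrative assumption.

C = 343.0                   # speed of sound in air (m/s)
LX, LY, LZ = 4.0, 3.0, 2.5  # assumed room dimensions (m)

def mode_frequency(nx: int, ny: int, nz: int) -> float:
    """Eigenfrequency of mode (nx, ny, nz) in a rigid-walled room."""
    return (C / 2.0) * math.sqrt((nx / LX) ** 2 +
                                 (ny / LY) ** 2 +
                                 (nz / LZ) ** 2)

def mode_pressure(nx: int, ny: int, nz: int,
                  x: float, y: float, z: float) -> float:
    """Relative pressure amplitude of the mode at point (x, y, z);
    pressure extrema are where a sensation would be strongest."""
    return (math.cos(nx * math.pi * x / LX) *
            math.cos(ny * math.pi * y / LY) *
            math.cos(nz * math.pi * z / LZ))

if __name__ == "__main__":
    # List all modes below 100 Hz, lowest first, with the relative
    # pressure each produces at the center of the room.
    modes = [(mode_frequency(nx, ny, nz), (nx, ny, nz))
             for nx, ny, nz in product(range(4), repeat=3)
             if (nx, ny, nz) != (0, 0, 0)]
    for f, m in sorted(modes):
        if f < 100.0:
            p = mode_pressure(*m, LX / 2, LY / 2, LZ / 2)
            print(f"mode {m}: {f:5.1f} Hz, pressure at center = {p:+.2f}")
```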

Authors
Waseem Hassan
University of Copenhagen, Copenhagen, Denmark
Asier Marzo
Universidad Publica de Navarra, Pamplona, Navarre, Spain
Kasper Hornbæk
University of Copenhagen, Copenhagen, Denmark
Paper URL

https://doi.org/10.1145/3613904.3642311

Remembering through Sound: Co-creating Sound-based Mementos together with People with Blindness
Abstract

Sound is a preferred and dominant medium that people with blindness use to capture, share, and reflect on meaningful moments in their lives. Over 12 months, we worked with seven people with blindness and two of their sighted loved ones in a multi-stage co-creative design process building toward a final co-design workshop. We report three types of sonic mementos, designed together with the participants, that Encapsulate, Augment, and Re-imagine personal audio recordings into more interesting and meaningful sonic memories. Building on these sonic mementos, we critically reflect on and describe insights into designing sound that supports personal and social experiences of reminiscence for people with blindness. We propose design opportunities to promote collective remembering between people with blindness and their sighted loved ones, and design recommendations for remembering through sound.

Authors
MinYoung Yoo
Simon Fraser University, Surrey, British Columbia, Canada
William Odom
Simon Fraser University, Surrey, British Columbia, Canada
Arne Berger
Anhalt University of Applied Sciences, Koethen, Germany
Samuel Barnett
Simon Fraser University, Surrey, British Columbia, Canada
Sadhbh Kenny
Simon Fraser University, Vancouver, British Columbia, Canada
Priscilla Lo
Simon Fraser University, Surrey, British Columbia, Canada
Samien Shamsher
Simon Fraser University, Surrey, British Columbia, Canada
Gillian Russell
Simon Fraser University, Surrey, British Columbia, Canada
Lauren Knight
University of Toronto, Toronto, Ontario, Canada
Paper URL

https://doi.org/10.1145/3613904.3641940
