Current sound recognition systems for deaf and hard of hearing (DHH) people identify sound sources or discrete events. However, these systems do not distinguish between similar-sounding events (e.g., a patient monitor beep vs. a microwave beep). In this paper, we introduce HACS, a novel futuristic approach to designing human-AI sound awareness systems. HACS assigns AI models to identify sounds based on their characteristics (e.g., a beep) and prompts DHH users to combine this information with their contextual knowledge (e.g., “I am in a kitchen”) to recognize sound events (e.g., a microwave). As a first step toward implementing HACS, we articulated a sound taxonomy that classifies sounds by their characteristics, drawing on insights from a multi-phase research process with people of mixed hearing abilities. We then performed a qualitative evaluation (with nine DHH people) and a quantitative evaluation (with a sound recognition model). Findings demonstrate the initial promise of HACS for designing accurate and reliable human-AI systems.
https://doi.org/10.1145/3613904.3642062
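To make the HACS division of labor concrete, here is a minimal sketch, assuming a hypothetical characteristic-to-source mapping (the paper's actual taxonomy is not reproduced here): the AI reports a sound's characteristic, and the user's stated context narrows the candidate sources.

```python
# Minimal sketch of the HACS idea: an AI model labels a sound's
# characteristic, and the user's context narrows the candidate sources.
# The mapping below is an illustrative placeholder, not the paper's taxonomy.

CANDIDATE_SOURCES = {
    ("beep", "kitchen"): ["microwave", "oven timer"],
    ("beep", "hospital room"): ["patient monitor", "infusion pump"],
    ("hiss", "kitchen"): ["kettle", "gas stove"],
}

def suggest_sources(characteristic: str, context: str) -> list[str]:
    """Combine the AI's characteristic label with the user's context."""
    return CANDIDATE_SOURCES.get((characteristic, context), [])

# Usage: the AI hears a beep; the user says they are in a kitchen.
print(suggest_sources("beep", "kitchen"))  # ['microwave', 'oven timer']
```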
About 20 percent of patients undergoing breast-conserving surgery require reoperation because cancerous tissue remains inside the breast. Breast cancer localization systems use auditory feedback to convey the distance between a localization probe and a small marker (seed) implanted into the breast tumor before surgery. However, these systems provide no information about the location of the tumor margin. To reduce the reoperation rate by improving the usability and accuracy of the surgical task, we developed an auditory display that uses shape sonification to assist with tumor margin localization. The accuracy and usability of the interactive shape sonification were determined on models of the female breast in three user studies with both breast surgeons and non-clinical participants. The comparative studies showed a significant increase in usability (p<0.05) and localization accuracy (p<0.001) for the shape sonification over the auditory feedback currently used in surgery.
https://doi.org/10.1145/3613904.3642257
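The abstract does not specify the sonification mapping, but a generic distance-to-pitch scheme illustrates the underlying idea of distance sonification; the frequency range and working distance below are assumptions for illustration, not the mapping used in the paper.

```python
# Illustrative distance-to-pitch sonification: map the probe-to-margin
# distance to a tone frequency that rises as the probe nears the margin.
# F_MIN/F_MAX and the 30 mm working distance are assumed values.

F_MIN, F_MAX = 220.0, 880.0   # Hz, assumed two-octave range
D_MAX = 30.0                  # mm, assumed maximum sensing distance

def distance_to_frequency(distance_mm: float) -> float:
    """Closer distances yield higher pitches (geometric interpolation)."""
    d = min(max(distance_mm, 0.0), D_MAX)
    t = 1.0 - d / D_MAX                    # 0 = far, 1 = at the margin
    return F_MIN * (F_MAX / F_MIN) ** t

for d in (30.0, 15.0, 5.0, 0.0):
    print(f"{d:5.1f} mm -> {distance_to_frequency(d):6.1f} Hz")
```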
This paper proposes a method for generating non-contact sensations using low-frequency sound waves, without requiring user instrumentation. The method leverages the fundamental acoustic response of a confined space, which at low frequencies produces predictable spatial pressure distributions called modes. These modes can be used to produce sensations throughout the body, in localized areas of the body, or within the body. We first validate the location and strength of the modes predicted by acoustic modeling. Next, a perceptual study shows how different frequencies produce qualitatively different sensations across and within participants' bodies. The results indicate high accuracy in predicting sensations at specific body locations. Low-frequency sound thus offers a new way of delivering non-contact sensations throughout the body.
https://doi.org/10.1145/3613904.3642311
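The "modes" here are standard room acoustics: a rigid-walled rectangular enclosure of dimensions Lx x Ly x Lz resonates at f = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²). The sketch below computes the lowest mode frequencies for an assumed room size; the paper's actual enclosure dimensions are not given in the abstract.

```python
import itertools
import math

C = 343.0  # speed of sound in air, m/s

def room_mode_freqs(lx: float, ly: float, lz: float, max_order: int = 2):
    """Resonant modes of a rigid-walled rectangular room:
    f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2)."""
    freqs = []
    for nx, ny, nz in itertools.product(range(max_order + 1), repeat=3):
        if (nx, ny, nz) == (0, 0, 0):
            continue  # skip the trivial (non-oscillating) solution
        f = (C / 2) * math.sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
        freqs.append(((nx, ny, nz), f))
    return sorted(freqs, key=lambda m: m[1])

# Example: a 5 m x 4 m x 3 m room (dimensions assumed for illustration).
for mode, f in room_mode_freqs(5.0, 4.0, 3.0)[:5]:
    print(f"mode {mode}: {f:6.1f} Hz")  # lowest mode: (1, 0, 0) at 34.3 Hz
```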
Sound is a preferred and dominant medium that people with blindness use to capture, share, and reflect on meaningful moments in their lives. Over 12 months, we worked with seven people with blindness and two of their sighted loved ones in a multi-stage co-creative design process that built toward a final co-design workshop. We report three types of sonic mementos, designed together with the participants, that Encapsulate, Augment, and Re-imagine personal audio recordings into more interesting and meaningful sonic memories. Building on these sonic mementos, we critically reflect on and describe insights into designing sound that supports personal and social experiences of reminiscence for people with blindness. We propose design opportunities to promote collective remembering between people with blindness and their sighted loved ones, along with design recommendations for remembering through sound.
https://doi.org/10.1145/3613904.3641940