Deaf and hard of hearing (DHH) people enjoy music and can access it through music-sensory substitution systems that deliver sound together with corresponding visual and tactile feedback. However, they often find it challenging to comprehend the colorful visuals and strong vibrations designed to represent music. Through focus group interviews with 24 DHH people, we confirmed that they need to form a concept of the cross-modal mapping before experiencing music-sensory substitution. To improve the music appreciation experience, we implemented a cross-modal music conceptualization system, a prototype that allows DHH people to explore the visuals and vibrations associated with music in order to perceive and appreciate it. An evaluation with 28 DHH individuals demonstrated that the system improves the subjective music appreciation experience delivered via music-sensory substitution. Notably, DHH participants with initially negative attitudes toward music became positive through the exploration and customization process with our system.
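The abstract does not specify the actual cross-modal mapping, so the following is only a minimal illustrative sketch, assuming one common feature choice: per-frame loudness drives vibration strength, and the dominant pitch class picks a hue. The function name `cross_modal_frames` and all parameter choices are hypothetical, not the paper's design.

```python
# Hypothetical sketch of an audio -> visual/tactile mapping.
# Feature choices (RMS loudness, chroma pitch class) are assumptions;
# the paper's actual mapping is not described in the abstract.
import colorsys
import numpy as np
import librosa

def cross_modal_frames(audio_path, hop_length=512):
    y, sr = librosa.load(audio_path, mono=True)
    # Loudness per frame -> vibration strength in [0, 1].
    rms = librosa.feature.rms(y=y, hop_length=hop_length)[0]
    vib = rms / (rms.max() + 1e-9)
    # Dominant pitch class per frame -> hue on the color wheel.
    chroma = librosa.feature.chroma_stft(y=y, sr=sr, hop_length=hop_length)
    hue = chroma.argmax(axis=0) / 12.0
    frames = []
    for h, v in zip(hue, vib):
        r, g, b = colorsys.hsv_to_rgb(float(h), 1.0, float(v))
        frames.append({"rgb": (r, g, b), "vibration": float(v)})
    return frames
```

In a real system such a mapping would presumably be user-adjustable, which is exactly the kind of exploration and customization the prototype offers DHH listeners.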
The development of cancer is difficult to convey in a simple and intuitive way because of its complexity. Since cancer is so widespread, raising public awareness of its mechanisms can help those affected cope with its realities and inspire others to make lifestyle adjustments and screen for the disease. Unfortunately, studies have shown that cancer literature is too technical for the general public to understand. Musification, the process of turning data into music, remains an unexplored avenue for conveying this information. We explore the pedagogical effectiveness of musification through an algorithm that manipulates a piece of music in a manner analogous to the development of cancer. In two lab studies, we found that our approach, when accompanied by a text-based article, is marginally more effective at promoting cancer literacy than a text-based article alone.
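The abstract does not describe the algorithm itself, so the sketch below is purely a hypothetical reading of the cancer analogy: a melody is treated as a population of notes, a few notes acquire pitch "mutations", and mutated notes proliferate until they crowd out the original material. All names and rates here are invented for illustration.

```python
# Hypothetical musification sketch: a melody "mutates" generation by
# generation, loosely analogous to mutation and proliferation in cancer.
# This is an illustrative assumption, not the paper's algorithm.
import random

MELODY = [60, 62, 64, 65, 67, 65, 64, 62]  # MIDI pitches (C major run)

def advance_generation(notes, mutation_rate=0.1, growth_rate=0.05):
    """One 'cell cycle': some notes mutate; mutated notes may duplicate."""
    next_gen = []
    for pitch, mutated in notes:
        if not mutated and random.random() < mutation_rate:
            pitch += random.choice([-1, 1])  # a semitone 'mutation'
            mutated = True
        next_gen.append((pitch, mutated))
        # Mutated notes proliferate, crowding out the original melody.
        if mutated and random.random() < growth_rate:
            next_gen.append((pitch, True))
    return next_gen

notes = [(p, False) for p in MELODY]
for _ in range(16):
    notes = advance_generation(notes)
print([p for p, _ in notes])  # the melody after 16 'cell cycles'
```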
Audio notifications provide users with an efficient way to access information beyond their current focus of attention. Current notification delivery methods, like phone ringtones, are primarily optimized for high noticeability, enhancing situational awareness in some scenarios but causing disruption and annoyance in others. In this work, we build on the observation that music listening is now a commonplace practice and present MARingBA, a novel approach that blends ringtones into background music to modulate their noticeability. We contribute a design space exploration of music-adaptive manipulation parameters, including beat matching, key matching, and timbre modifications, to tailor ringtones to different songs. Through two studies, we demonstrate that MARingBA supports content creators in authoring audio notifications that fit low, medium, and high levels of urgency and noticeability. Additionally, end users prefer music-adaptive audio notifications over conventional delivery methods, such as volume fading.
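MARingBA's implementation is not given in the abstract. As a rough sketch of the beat-matching and key-matching ideas it names, one could time-stretch a ringtone to the background song's estimated tempo and pitch-shift it toward the song's key. The `adapt_ringtone` helper and the caller-supplied `semitone_shift` (standing in for an omitted key-detection step) are assumptions, not part of MARingBA.

```python
# Minimal sketch of beat matching and key matching (not MARingBA's
# actual implementation): stretch a ringtone to the song's tempo,
# then shift its pitch by a caller-supplied interval.
import librosa
import soundfile as sf

def adapt_ringtone(ringtone_path, song_path, semitone_shift, out_path):
    ring, sr = librosa.load(ringtone_path)
    song, _ = librosa.load(song_path, sr=sr)
    # Estimate tempi; beat_track returns (tempo_bpm, beat_frames).
    ring_tempo, _ = librosa.beat.beat_track(y=ring, sr=sr)
    song_tempo, _ = librosa.beat.beat_track(y=song, sr=sr)
    # Beat matching: stretch the ringtone so its BPM matches the song's.
    matched = librosa.effects.time_stretch(
        ring, rate=float(song_tempo) / float(ring_tempo))
    # Key matching: shift by the interval between ringtone and song keys
    # (supplied by the caller; automatic key detection is omitted here).
    matched = librosa.effects.pitch_shift(matched, sr=sr, n_steps=semitone_shift)
    sf.write(out_path, matched, sr)
```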
Composers use music notation programs throughout their creative process. These programs are essentially elaborate structured document editors that enable composers to create high-quality scores by enforcing music notation rules. They effectively support music engraving but, because of their lack of flexibility, impede the more creative stages of the composition process. Composers therefore often combine these desktop tools with other media such as paper. Interactive surfaces that support pen and touch input have the potential to ease the tension between the conflicting needs for structure and flexibility. We interviewed nine professional composers. We report insights about their thought processes and creative intentions, and rely on the ``Cognitive Dimensions of Notations'' framework to capture the frictions they experience when materializing those intentions on a score. We then discuss how interactive surfaces could increase flexibility by temporarily breaking the structure when manipulating the notation.
The practice of sound design involves creating and manipulating environmental sounds for music, films, or games. Recently, an increasing number of studies have adopted generative AI to assist in sound design co-creation. Most of these studies focus on the needs of novices rather than the pragmatic needs of sound design practitioners. In this paper, we aim to understand how generative AI models might support sound designers in their practice. We designed two interactive generative AI models as Creative Support Tools (CSTs) and invited nine professional sound design practitioners to apply the CSTs in their practice. We conducted semi-structured interviews and reflected on the challenges and opportunities of using generative AI in mixed-initiative interfaces for sound design. We provide insights into sound designers' expectations of generative AI and highlight opportunities to situate generative AI-based tools within the design process. Finally, we discuss design considerations for human-AI interaction researchers working with audio.