Recent advancements in text-to-music generative AI (GenAI) have significantly expanded access to music creation. However, deaf and hard of hearing (DHH) individuals remain largely excluded from these developments. This study explores how music GenAI could enhance the music-making experience of DHH individuals, who often rely on hearing people to translate sounds and music. We developed a multimodal assistive music-making tool informed by focus group interviews. The tool enables DHH users to create and edit music independently through language-based interaction with music GenAI, supported by integrated visual and tactile feedback. Findings from our music-making study revealed that the system empowers DHH users to engage in independent and proactive music-making, increasing their confidence, fostering musical expression, and positively shifting their attitudes toward music. By preserving the unique sensory characteristics of DHH individuals, this study contributes to inclusive art and demonstrates how music GenAI can benefit a marginalized community, fostering independent creative expression.
https://dl.acm.org/doi/10.1145/3706598.3714298
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)