This study session has ended. Thank you for your participation.
While generative deep neural networks (DNNs) have demonstrated their capacity for creating novel musical compositions, less attention has been paid to the challenges and potential of co-creating with these musical AIs, especially for novices. In a needfinding study with a widely used, interactive musical AI, we found that the AI can overwhelm users with the amount of musical content it generates, and frustrate them with its non-deterministic output. To better match co-creation needs, we developed AI-steering tools, consisting of Voice Lanes that restrict content generation to particular voices; Example-Based Sliders to control the similarity of generated content to an existing example; Semantic Sliders to nudge music generation in high-level directions (happy/sad, conventional/surprising); and Multiple Alternatives of generated content to audition and choose from. In a summative study (N=21), we discovered the tools not only increased users' trust, control, comprehension, and sense of collaboration with the AI, but also contributed to a greater sense of self-efficacy and ownership of the composition relative to the AI.
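As a rough illustration of how slider-style steering tools of this kind are often realized, the sketch below nudges a generated phrase along a learned attribute direction in a model's latent space (a semantic slider) and samples several nearby variants for the user to audition (multiple alternatives). This is a minimal, hypothetical sketch, not the paper's implementation: the decoder, the attribute direction, and all names are placeholders.

```python
import numpy as np

# Hypothetical latent-space sketch of a "semantic slider" and "multiple
# alternatives". decode() and happy_direction are illustrative stand-ins,
# not the actual model from the paper.

rng = np.random.default_rng(0)
LATENT_DIM = 64

def decode(z):
    """Placeholder decoder: maps a latent vector to a toy note sequence."""
    return list((np.abs(z[:16]) * 12 + 60).astype(int))  # 16 MIDI-like pitches

# Stand-in for a learned "happy vs. sad" attribute axis in latent space.
happy_direction = rng.normal(size=LATENT_DIM)
happy_direction /= np.linalg.norm(happy_direction)

def semantic_slider(z, amount):
    """amount in [-1, 1]: negative nudges toward 'sad', positive toward 'happy'."""
    return z + amount * happy_direction

def multiple_alternatives(z, n=3, noise=0.1):
    """Sample nearby latents so the user can audition and choose one."""
    return [decode(z + noise * rng.normal(size=LATENT_DIM)) for _ in range(n)]

z = rng.normal(size=LATENT_DIM)            # latent code of the current phrase
steered = semantic_slider(z, amount=0.8)   # user drags the happy/sad slider
for i, phrase in enumerate(multiple_alternatives(steered)):
    print(f"alternative {i}: {phrase}")
```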
Computerized solutions in the domain of creativity and expressive performance increasingly provide artists and the arts with exciting new opportunities. However, the combination of automation and creativity also provokes controversy and resistance in some user groups. This paper considers the case of software-generated visuals in live music performance and seeks to make sense of the ambivalent response of its intended users (i.e., DJs and VJs). We carried out seven face-to-face interviews, an online survey (N = 102), and 25 remote interviews to unravel DJs' and VJs' positions on automated visual software. Four core controversies were identified, revolving around the implications of using such software for DJs' and VJs' identities as artists and for their competitive advantage in their sector. The conclusions connect these findings to the larger issue of understanding users' responses to automation.
Short online videos have become the dominant medium on social platforms. However, finding suitable music to accompany videos can be challenging for some video creators due to copyright constraints, limitations of search engines, and the audio-editing expertise required. One possible solution to these problems is AI music generation. In this paper we present a user interface (UI) paradigm that allows users to input a song to an AI engine and then interactively regenerate and mix the AI-generated music. To arrive at this design, we conducted user studies with a total of 104 video creators at several stages of our design and development process. The studies supported the effectiveness of our approach and provided valuable insights about human-AI interaction as well as the design and evaluation of mixed-initiative interfaces in creative practice.
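To make the regenerate-and-mix loop concrete, the toy sketch below regenerates an AI take with a new seed until the user is satisfied, then blends it with the original song under a user-controlled mix level. It is a minimal sketch under the assumption that the engine returns an audio buffer matching the selected segment; generate_music() and the seed-based loop are illustrative placeholders, not the system described in the paper.

```python
import numpy as np

SR = 22050  # sample rate in Hz

def generate_music(duration_s, seed):
    """Placeholder 'AI engine': a sine melody whose pitch depends on the seed."""
    t = np.linspace(0, duration_s, int(SR * duration_s), endpoint=False)
    freq = 220 * (1 + (seed % 5) / 5)
    return 0.3 * np.sin(2 * np.pi * freq * t)

def mix(original, generated, ai_level):
    """Linear mix; ai_level in [0, 1] is the user's mix-slider position."""
    n = min(len(original), len(generated))
    return (1 - ai_level) * original[:n] + ai_level * generated[:n]

original = generate_music(5.0, seed=0)     # stand-in for the uploaded song segment
take = generate_music(5.0, seed=1)         # first AI take
for seed in (2, 3):                        # user hits "regenerate" twice
    take = generate_music(5.0, seed=seed)
final = mix(original, take, ai_level=0.6)  # blend to taste
print(final.shape, final.dtype)
```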
Music streaming services collect listener data to support personalization and discovery of their extensive catalogs. Yet this data is typically used in ways that are not immediately apparent to listeners. We conducted design workshops with ten Spotify listeners to imagine future voice assistant (VA) interactions leveraging logged music data. We provided participants with detailed personal music listening data, such as play-counts and temporal patterns, which grounded their design ideas in their current behaviors. In the interactions participants designed, VAs did not simply speak their data out loud; instead, participants envisioned how data could implicitly support introspection, behavior change, and exploration. We present reflections on how VAs could evolve from voice-activated remote controls to intelligent music coaches and how personal data can be leveraged as a design resource.
In this paper we explore how the qualities of Autonomous Sensory Meridian Response (ASMR) media (its pairing of sonic and visual design, its ability to subvert fast-paced technology for slow experiences, its production of somatic responses, and its attention to the everyday) might reveal new design possibilities for interactions with wearable technology. We recount our year-long design inquiry into the subject, which began with an interview with a "live" ASMR creator and a set of design probes, continued through a series of first-person design exercises, and resulted in two interactive garments for attending to, noticing, and becoming enchanted with our everyday surroundings. We conclude by suggesting that these ASMR-inspired designs cultivate personal, intimate, embodied, and felt practices of attention within our everyday, mundane environments.