Music & sound experiences

Paper session

Conference
CHI 2020
Novice-AI Music Co-Creation via AI-Steering Tools for Deep Generative Models
Abstract

While generative deep neural networks (DNNs) have demonstrated their capacity for creating novel musical compositions, less attention has been paid to the challenges and potential of co-creating with these musical AIs, especially for novices. In a needfinding study with a widely used, interactive musical AI, we found that the AI can overwhelm users with the amount of musical content it generates, and frustrate them with its non-deterministic output. To better match co-creation needs, we developed AI-steering tools, consisting of Voice Lanes that restrict content generation to particular voices; Example-Based Sliders to control the similarity of generated content to an existing example; Semantic Sliders to nudge music generation in high-level directions (happy/sad, conventional/surprising); and Multiple Alternatives of generated content to audition and choose from. In a summative study (N=21), we discovered the tools not only increased users' trust, control, comprehension, and sense of collaboration with the AI, but also contributed to a greater sense of self-efficacy and ownership of the composition relative to the AI.

Keywords
Human-AI Interaction
Generative Deep Neural Networks
Co-Creation
Authors
Ryan Louie
Northwestern University, Evanston, IL, USA
Andy Coenen
Google Research, Mountain View, CA, USA
Cheng Zhi Huang
Independent Researcher, Mountain View, CA, USA
Michael Terry
Google Research, Cambridge, MA, USA
Carrie J. Cai
Google Research, Mountain View, CA, USA
DOI

10.1145/3313831.3376739

Paper URL

https://doi.org/10.1145/3313831.3376739

Video
Automation and Creativity: A Case Study of DJs' and VJs' Ambivalent Positions on Automated Visual Software
Abstract

Computerized solutions in the domain of creativity and expressive performance increasingly provide art and artists with exciting new opportunities. However, the combination of automation and creativity also raises controversy and resistance in some user groups. This paper considers the case of software-generated visuals in live music performance and seeks to make sense of the ambivalent response given by its intended users (i.e., DJs and VJs). We carried out seven face-to-face interviews, an online survey (N = 102) and 25 remote interviews to unravel DJs' and VJs' positions on automated visual software. Four core controversies were identified, gravitating around the implications of using such software for DJs' and VJs' identities as artists and for their competitive advantage in their activity sector. The conclusions reconnect these findings with the larger issue of understanding users' responses to automation.

Keywords
Automation
Creativity
Visual software
Live music performance
Acceptance
Ambivalence
Argumentation
Authors
Anna Spagnolli
University of Padova, Padova, Italy
Diletta Mora
University of Padova, Padova, Italy
Matteo Fanchin
University of Padova, Padova, Italy
Valeria Orso
University of Padova, Padova, Italy
Luciano Gamberini
University of Padova, Padova, Italy
DOI

10.1145/3313831.3376463

Paper URL

https://doi.org/10.1145/3313831.3376463

Music Creation by Example
Abstract

Short online videos have become the dominant media on social platforms. However, finding suitable music to accompany videos can be a challenging task for some video creators, due to copyright constraints, limitations in search engines, and the audio-editing expertise required. One possible solution to these problems is to use AI music generation. In this paper we present a user interface (UI) paradigm that allows users to input a song to an AI engine and then interactively regenerate and mix AI-generated music. To arrive at this design, we conducted user studies with a total of 104 video creators at several stages of our design and development process. The user studies supported the effectiveness of our approach and provided valuable insights about human-AI interaction as well as the design and evaluation of mixed-initiative interfaces in creative practice.

Keywords
Music Generation
Artificial Intelligence
Mixed-Initiative Interaction
Algorithmic Composition
Authors
Emma Frid
KTH Royal Institute of Technology, Stockholm, Sweden
Celso Gomes
Adobe Research, Seattle, WA, USA
Zeyu Jin
Adobe Research, San Francisco, CA, USA
DOI

10.1145/3313831.3376514

Paper URL

https://doi.org/10.1145/3313831.3376514

Video
Giving Voice to Silent Data: Designing with Personal Music Listening History
Abstract

Music streaming services collect listener data to support personalization and discovery of their extensive catalogs. Yet this data is typically used in ways that are not immediately apparent to listeners. We conducted design workshops with ten Spotify listeners to imagine future voice assistant (VA) interactions leveraging logged music data. We provided participants with detailed personal music listening data, such as play-counts and temporal patterns, which grounded their design ideas in their current behaviors. In the interactions participants designed, VAs did not simply speak their data out loud; instead, participants envisioned how data could implicitly support introspection, behavior change, and exploration. We present reflections on how VAs could evolve from voice-activated remote controls to intelligent music coaches and how personal data can be leveraged as a design resource.

Keywords
voice assistants
co-design
participatory design
personal informatics
music
speculative design
Authors
Jordan Wirfs-Brock
University of Colorado, Boulder, Boulder, CO, USA
Sarah Mennicken
Spotify, San Francisco, CA, USA
Jennifer Thom
Spotify, Boston, MA, USA
DOI

10.1145/3313831.3376493

Paper URL

https://doi.org/10.1145/3313831.3376493

Video
What HCI Can Learn from ASMR: Becoming Enchanted with the Mundane
Abstract

In this paper we explore how the qualities of Autonomous Sensory Meridian Response (ASMR) media – its pairing of sonic and visual design, ability to subvert fast-paced technology for slow experiences, production of somatic responses, and attention to the everyday – might reveal new design possibilities for interactions with wearable technology. We recount our year-long design inquiry into the subject, which began with an interview with a "live" ASMR creator and design probes, continued with a series of first-person design exercises, and resulted in the creation of two interactive garments for attending to, noticing, and becoming enchanted with our everyday surroundings. We conclude by suggesting that these ASMR-inspired designs cultivate personal, intimate, embodied, and felt practices of attention within our everyday, mundane environments.

Keywords
ASMR Media
Sonic Interaction
Wearable Technology
Enchantment
Smart Textiles
Authors
Josephine Klefeker
University of Colorado Boulder, Boulder, CO, USA
libi striegl
University of Colorado Boulder, Boulder, CO, USA
Laura Devendorf
University of Colorado Boulder, Boulder, CO, USA
DOI

10.1145/3313831.3376741

Paper URL

https://doi.org/10.1145/3313831.3376741