Sound, Rhythm, Movement

Conference Name
CHI 2024
FabSound: Audio-Tactile and Affective Fabric Experiences Through Mid-air Haptics
Abstract

The sound produced when touching a fabric, such as a blanket, often provides information about the fabric’s texture properties (e.g., its roughness). Roughness is one of the most important aspects of assessing a fabric’s tactile properties. Prior research has demonstrated that touch-related sounds can alter the perception of textures. However, understanding the touch-related sounds of digital fabric textures, and how they can convey affective responses, remains a challenge. In this study, we mapped digital fabric textures to mid-air haptic stimuli and examined how auditory manipulation influences people’s roughness perception. In qualitative interviews, participants explained that while rubbing sounds smoothed the perceived texture of a fabric, pure tones of 450 Hz and 900 Hz accentuated its perceived roughness. The rubbing sound of fabric evoked associations with soft materials and led to more calming experiences. Finally, we discuss how haptic interaction can be extended to multisensory modes, revealing a new perspective on mapping multisensory experiences for digital fabrics.
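The 450 Hz and 900 Hz pure tones reported above are straightforward to reproduce for anyone exploring similar audio-tactile stimuli. Below is a minimal synthesis sketch in Python, using only numpy and the standard wave module; the duration, amplitude, and sample rate are assumptions, since the paper’s actual stimulus pipeline is not described here.

```python
import wave
import numpy as np

def write_pure_tone(path, freq_hz, duration_s=1.0, sample_rate=44100, amplitude=0.5):
    """Synthesize a sine wave at freq_hz and save it as a 16-bit mono WAV file."""
    t = np.linspace(0.0, duration_s, int(sample_rate * duration_s), endpoint=False)
    samples = (amplitude * np.sin(2.0 * np.pi * freq_hz * t) * 32767.0).astype(np.int16)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)        # mono
        wav.setsampwidth(2)        # 16-bit samples
        wav.setframerate(sample_rate)
        wav.writeframes(samples.tobytes())

# The two pure tones participants associated with accentuated roughness:
write_pure_tone("tone_450hz.wav", 450.0)
write_pure_tone("tone_900hz.wav", 900.0)
```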

Award
Honorable Mention
Authors
Jing Xue
University College London, London, United Kingdom
Roberto Montano Murillo
Ultraleap, Bristol, United Kingdom
Christopher Dawes
University College London, London, United Kingdom
William Frier
Ultraleap, Bristol, United Kingdom
Patricia Cornelio
Ultraleap, Bristol, United Kingdom
Marianna Obrist
University College London, London, United Kingdom
Paper URL

https://doi.org/10.1145/3613904.3642533

Exploring Collaborative Movement Improvisation Towards the Design of LuminAI—a Co-Creative AI Dance Partner
Abstract

Co-creation in embodied contexts is central to the human experience but is often lacking in our interactions with computers. We seek to develop a better understanding of embodied human co-creativity to inform the human-centered design of machines that can co-create with us. In this paper, we ask: What characterizes dancers’ experiences of embodied dyadic interaction in movement improvisation? To answer this, we ran focus groups with 24 university dance students and conducted a thematic analysis of their responses. We synthesize our findings into an interconnected model of improvisational dance inputs, in which movement choices are shaped by in-the-moment influences among the self, the partner, and the environment, as well as by a set of generative strategies and heuristics for successful collaboration. We present a set of design recommendations for LuminAI, a co-creative AI dance partner. Our contributions can inform the design of AI in embodied co-creative domains.

Authors
Milka Trajkova
Georgia Institute of Technology, Atlanta, Georgia, United States
Duri Long
Northwestern University, Evanston, Illinois, United States
Manoj Deshpande
Georgia Institute of Technology, Atlanta, Georgia, United States
Andrea Knowlton
Kennesaw State University, Kennesaw, Georgia, United States
Brian Magerko
Georgia Institute of Technology, Atlanta, Georgia, United States
Paper URL

https://doi.org/10.1145/3613904.3642677

Understanding Feedback in Rhythmic Gymnastics Training: An Ethnographic-Informed Study of a Competition Class
Abstract

Rhythmic Gymnastics is an Olympic sport that demands an exceptional level of expertise. From an early age, athletes relentlessly practise exercises until they can flawlessly perform them before an audience and a panel of judges. Technology could support rhythmic gymnasts’ training by monitoring their exercises and providing feedback on their execution. However, the limited understanding of the nuances of Rhythmic Gymnastics training restricts the development of technologies to support it. Drawing on observations of training sessions and on interviews with athletes and coaches, this paper uncovers how coaches personalise the timing, type, form, format, and quantity of feedback to suit the gymnasts’ skill level and the type of exercise. Taking stock of our findings, we draw out five implications that can inform the design of systems to support feedback in Rhythmic Gymnastics training.

Award
Best Paper
Authors
Leonor Portugal da Fonseca
University of Coimbra, Coimbra, Portugal
Francisco Nunes
Fraunhofer Portugal AICOS, Porto, Portugal
Paula Alexandra Silva
Universidade de Coimbra, Coimbra, Portugal
Paper URL

https://doi.org/10.1145/3613904.3642434

Designing and Evaluating an Advanced Dance Video Comprehension Tool with In-situ Move Identification Capabilities
Abstract

Analyzing dance moves and routines is a foundational step in learning dance. Videos are often used at this step, and advances in machine learning, particularly in human-movement recognition, could further assist dance learners. We developed and evaluated a Wizard-of-Oz prototype of a video comprehension tool that offers automatic in-situ dance move identification. Our system design was informed by an interview study with 12 dancers, conducted to understand the challenges they face when trying to comprehend complex dance videos and take notes. Subsequently, we conducted a within-subject study with 8 Cuban salsa dancers to identify the benefits of our system over an existing, traditional feature-based search system. We found that the quality of the notes participants took improved when using our tool, and they reported a lower workload. Based on participants’ interactions with our system, we offer recommendations on how an AI-powered span-search feature can enhance dance video comprehension tools.
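The span-search idea mentioned above can be pictured as a query over time-stamped move labels produced by a recognition model. The sketch below is a hypothetical Python illustration: the MoveSpan structure, the move names, and the matching rule are all assumptions, not the paper’s actual prototype.

```python
from dataclasses import dataclass

@dataclass
class MoveSpan:
    label: str      # recognized move name, e.g., "enchufla"
    start_s: float  # span start within the video, in seconds
    end_s: float    # span end within the video, in seconds

def find_move_spans(spans, query):
    """Return every annotated span whose move label matches the query."""
    q = query.strip().lower()
    return [s for s in spans if s.label.lower() == q]

# Hypothetical output of an automatic move-identification model:
annotated = [
    MoveSpan("enchufla", 12.4, 15.9),
    MoveSpan("dile que no", 16.0, 19.2),
    MoveSpan("enchufla", 47.3, 50.8),
]
for span in find_move_spans(annotated, "enchufla"):
    print(f"{span.label}: {span.start_s:.1f}s - {span.end_s:.1f}s")
```

A learner could jump directly to each returned span instead of scrubbing through the whole video, which is the kind of benefit the study compared against traditional feature-based search.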

Award
Honorable Mention
Authors
Saad Hassan
Tulane University, New Orleans, Louisiana, United States
Caluã de Lacerda Pataca
Rochester Institute of Technology, Rochester, New York, United States
Laleh Nourian
Rochester Institute of Technology, Rochester, New York, United States
Garreth W. Tigwell
Rochester Institute of Technology, Rochester, New York, United States
Briana Davis
Rochester Institute of Technology, Rochester, New York, United States
Will Zhenya Silver Wagman
Tulane University, New Orleans, Louisiana, United States
Paper URL

https://doi.org/10.1145/3613904.3642710

DoodleTunes: Interactive Visual Analysis of Music-Inspired Children Doodles with Automated Feature Annotation
Abstract

Music and visual arts are essential to children’s arts education, and their integration has garnered significant attention. Existing data analysis methods for exploring audio-visual correlations are limited, yet such analysis is necessary for innovating and promoting arts-integration courses. In this work, we collected a substantial volume of music-inspired doodles created by children and interviewed education experts to understand the challenges they encounter in analyzing them. Based on these insights, we designed and built DoodleTunes, an interactive visualization system that integrates deep-learning-driven methods for automatically annotating several types of data features. The system’s visual designs follow a four-level analysis structure that forms a progressive workflow, facilitating data exploration and insight discovery between doodle images and the corresponding music pieces. We evaluated the accuracy of our feature-prediction results and collected usage feedback on DoodleTunes from five domain experts.

Authors
Shuqi Liu
East China Normal University, Shanghai, China
Jia Bu
East China Normal University, Shanghai, China
Huayuan Ye
East China Normal University, Shanghai, China
Juntong Chen
East China Normal University, Shanghai, China
Shiqi Jiang
East China Normal University, Shanghai, China
Mingtian Tao
East China Normal University, Shanghai, China
Liping Guo
East China Normal University, Shanghai, China
Changbo Wang
East China Normal University, Shanghai, China
Chenhui Li
East China Normal University, Shanghai, China
Paper URL

https://doi.org/10.1145/3613904.3642346
