The sound produced when touching fabrics, like a blanket, often provides information about the fabric’s texture properties (e.g., its roughness). Fabric roughness is one of the most important aspects of assessing fabric tactile properties. Prior research has demonstrated that touch-related sounds can alter the perception of textures. However, understanding the touch-related sounds of digital fabric textures, and how these sounds convey affective responses, remains a challenge. In this study, we mapped digital fabric textures to mid-air haptic stimuli and examined how auditory manipulation influences people’s roughness perception. In qualitative interviews, participants reported that while rubbing sounds smoothed the perceived fabric texture, pure tones at 450 Hz and 900 Hz accentuated perceived roughness. The rubbing sound of fabric evoked associations with soft materials and led to more calming experiences. In addition, we discuss how haptic interaction can be extended to multisensory modes, revealing a new perspective on mapping multisensory experiences for digital fabrics.
https://doi.org/10.1145/3613904.3642533
Co-creation in embodied contexts is central to the human experience but is often lacking in our interactions with computers. We seek to develop a better understanding of embodied human co-creativity to inform the human-centered design of machines that can co-create with us. In this paper, we ask: What characterizes dancers’ experiences of embodied dyadic interaction in movement improvisation? To answer this, we ran focus groups with 24 university dance students and conducted a thematic analysis of their responses. We synthesize our findings in an interconnected model of improvisational dance inputs, in which movement choices are shaped by in-the-moment interplays between the self, the partner, and the environment, as well as by a set of generative strategies and heuristics for successful collaboration. We present a set of design recommendations for LuminAI, a co-creative AI dance partner. Our contributions can inform the design of AI in embodied co-creative domains.
https://doi.org/10.1145/3613904.3642677
Rhythmic Gymnastics is an Olympic sport that demands an exceptional level of expertise. From an early age, athletes relentlessly practise exercises until they can flawlessly perform them before an audience and a panel of judges. Technology can potentially support rhythmic gymnasts' training by monitoring their exercises and providing feedback on execution. However, the limited understanding of the training nuances in Rhythmic Gymnastics restricts the development of technologies to support training. Drawing on observations of training sessions and interviews with athletes and coaches, this paper uncovers how coaches personalise the timing, type, form, format, and quantity of feedback to adapt it to the gymnast's skill level and the type of exercise. Taking stock of our findings, we draw out five implications that can inform the design of systems to support feedback in Rhythmic Gymnastics training.
https://doi.org/10.1145/3613904.3642434
Analyzing dance moves and routines is a foundational step in learning dance. Videos are often used at this step, and advancements in machine learning, particularly in human-movement recognition, could further assist dance learners. We developed and evaluated a Wizard-of-Oz prototype of a video comprehension tool that offers automatic in-situ dance move identification. Our system design was informed by an interview study with 12 dancers aimed at understanding the challenges they face when comprehending complex dance videos and taking notes. Subsequently, we conducted a within-subjects study with 8 Cuban salsa dancers to identify the benefits of our system over an existing feature-based search system. We found that the quality of participants’ notes improved when using our tool, and they reported a lower workload. Based on participants’ interactions with our system, we offer recommendations on how an AI-powered span-search feature can enhance dance video comprehension tools.
https://doi.org/10.1145/3613904.3642710
Music and visual arts are essential in children's arts education, and their integration has garnered significant attention. However, existing data analysis methods for exploring audio-visual correlations are limited, even though such analysis is necessary for innovating and promoting arts integration courses. In our work, we collected a substantial volume of music-inspired doodles created by children and interviewed education experts to understand the challenges they encounter in analyzing such data. Based on the insights we obtained, we designed and built DoodleTunes, an interactive visualization system that integrates deep-learning-driven methods for automatically annotating several types of data features. The system's visual designs follow a four-level analysis structure that forms a progressive workflow, facilitating data exploration and insight discovery across doodle images and their corresponding music pieces. We evaluated the accuracy of our feature prediction results and collected usage feedback on DoodleTunes from five domain experts.
https://doi.org/10.1145/3613904.3642346