DoodleTunes: Interactive Visual Analysis of Music-Inspired Children Doodles with Automated Feature Annotation

Abstract

Music and visual arts are essential in children's arts education, and their integration has garnered significant attention. However, existing data analysis methods for exploring audio-visual correlations are limited, even though such research is necessary for innovating and promoting arts-integration courses. In our work, we collected a large volume of music-inspired doodles created by children and interviewed education experts to understand the challenges they encounter in analyzing them. Based on these insights, we designed and built DoodleTunes, an interactive visualization system. DoodleTunes integrates deep learning-driven methods for automatically annotating several types of data features. Its visual designs follow a four-level analysis structure that forms a progressive workflow, facilitating data exploration and insight discovery between doodle images and their corresponding music pieces. We evaluated the accuracy of our feature prediction results and collected usage feedback on DoodleTunes from five domain experts.

Authors
Shuqi Liu
East China Normal University, Shanghai, China
Jia Bu
East China Normal University, Shanghai, China
Huayuan Ye
East China Normal University, Shanghai, China
Juntong Chen
East China Normal University, Shanghai, China
Shiqi Jiang
East China Normal University, Shanghai, China
Mingtian Tao
East China Normal University, Shanghai, China
Liping Guo
East China Normal University, Shanghai, China
Changbo Wang
East China Normal University, Shanghai, China
Chenhui Li
East China Normal University, Shanghai, China
Paper URL

https://doi.org/10.1145/3613904.3642346

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Sound, Rhythm, Movement

316C
5 presentations
2024-05-15 01:00:00
2024-05-15 02:20:00