Evaluating Singing for Computer Input Using Pitch, Interval, and Melody

Abstract

In voice-based interfaces, non-verbal features represent a simple and underutilized design space for hands-free, language-agnostic interactions. We evaluate the performance of three fundamental types of voice-based musical interactions: pitch, interval, and melody. These interactions involve singing or humming a sequence of one or more notes. A 21-person study evaluates the feasibility and enjoyability of these interactions. The top-performing participants were able to perform all interactions reasonably quickly (<5 s), with average error rates between 1.3% and 8.6% after training. Others improved with training but still had error rates as high as 46% for pitch and melody interactions. The majority of participants found all tasks enjoyable. Based on these results, we propose design considerations for singing interactions as well as potential use cases for both standard computers and augmented reality glasses.

Authors
Graeme Zinck
University of Waterloo, Waterloo, Ontario, Canada
Daniel Vogel
University of Waterloo, Waterloo, Ontario, Canada
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517691

Video

Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Improving Input and Output

5 presentations
Session time: 2022-05-02 23:15:00 – 2022-05-03 00:30:00