Visualization and Sonification

Conference Name
CHI 2024
Glanceable Data Visualizations for Older Adults: Establishing Thresholds and Examining Disparities Between Age Groups
Abstract

We present the results of a replication study on smartwatch visualizations with adults aged 65 and older. The older adult population is rising globally, coinciding with their increasing interest in using small wearable devices, such as smartwatches, to track and view data. Smartwatches, however, pose challenges to this population: fonts and visualizations are often small and meant to be seen at a glance. How concise design on smartwatches interacts with aging-related changes in perception and cognition, however, is not well understood. We replicate a study that investigated how visualization type and number of data points affect glanceable perception. We observe strong evidence of differences for participants aged 75 and older, sparking interesting questions regarding the study of visualization and older adults. We discuss first steps toward better understanding and supporting an older population of smartwatch wearers and reflect on our experiences working with this population. Supplementary materials are available at https://osf.io/7x4hq/.

Award
Honorable Mention
Authors
Zack While
University of Massachusetts Amherst, Amherst, Massachusetts, United States
Tanja Blascheck
University of Stuttgart, Stuttgart, Germany
Yujie Gong
Smith College, Northampton, Massachusetts, United States
Petra Isenberg
Université Paris-Saclay, CNRS, Orsay, France
Ali Sarvghad
University of Massachusetts Amherst, Amherst, Massachusetts, United States
Paper URL

doi.org/10.1145/3613904.3642776

DynaVis: Dynamically Synthesized UI Widgets for Visualization Editing
Abstract

Users often rely on GUIs to edit and interact with visualizations, a daunting task due to the large space of editing options. As a result, users are either overwhelmed by a complex UI or constrained by a custom UI with a tailored, fixed subset of options with limited editing flexibility. Natural Language Interfaces (NLIs) are emerging as a feasible alternative for users to specify edits. However, NLIs forgo the advantages of traditional GUIs: the ability to explore and repeat edits and see instant visual feedback. We introduce DynaVis, which blends natural language and dynamically synthesized UI widgets. As the user describes an editing task in natural language, DynaVis performs the edit and synthesizes a persistent widget that the user can interact with to make further modifications. Study participants (n=24) preferred DynaVis over the NLI-only interface, citing ease of further edits and editing confidence due to immediate visual feedback.

Award
Best Paper
Authors
Priyan Vaithilingam
Harvard University, Cambridge, Massachusetts, United States
Elena L. Glassman
Harvard University, Cambridge, Massachusetts, United States
Jeevana Priya Inala
Microsoft, Redmond, Washington, United States
Chenglong Wang
Microsoft Research, Redmond, Washington, United States
Paper URL

doi.org/10.1145/3613904.3642639

Graph4GUI: Graph Neural Networks for Representing Graphical User Interfaces
Abstract

Present-day graphical user interfaces (GUIs) exhibit diverse arrangements of text, graphics, and interactive elements such as buttons and menus, but representations of GUIs have not kept up: they do not encapsulate both the semantic and the visuo-spatial relationships among elements. To seize machine learning's potential for GUIs more efficiently, Graph4GUI exploits graph neural networks to capture individual elements' properties and their semantic-visuo-spatial constraints in a layout. The learned representation demonstrated its effectiveness in multiple tasks, especially generating designs in a challenging GUI autocompletion task, which involved predicting the positions of remaining unplaced elements in a partially completed GUI. The new model's suggestions showed alignment and visual appeal superior to the baseline method and received higher subjective ratings for preference. Furthermore, we demonstrate the practical benefits and efficiency advantages designers perceive when utilizing our model as an autocompletion plug-in.

Authors
Yue Jiang
Aalto University, Espoo, Finland
Changkong Zhou
Aalto University, Espoo, Finland
Vikas Garg
Aalto University, Espoo, Finland
Antti Oulasvirta
Aalto University, Helsinki, Finland
Paper URL

doi.org/10.1145/3613904.3642822

Erie: A Declarative Grammar for Data Sonification
Abstract

Data sonification—mapping data variables to auditory variables, such as pitch or volume—is used for data accessibility, scientific exploration, and data-driven art (e.g., museum exhibitions), among others. While a substantial amount of research has been conducted on effective and intuitive sonification design, software support is not commensurate, limiting researchers from fully exploring its capabilities. We contribute Erie, a declarative grammar for data sonification that enables abstractly expressing auditory mappings. Erie supports specifying extensible tone designs (e.g., periodic wave, sampling, frequency/amplitude modulation synthesizers), various encoding channels, auditory legends, and composition options like sequencing and overlaying. Using standard Web Audio and Web Speech APIs, we provide an Erie compiler for web environments. We demonstrate the expressiveness and feasibility of Erie by replicating research prototypes presented in prior work and provide a sonification design gallery. We discuss future steps to extend Erie toward other audio computing environments and support interactive data sonification.

Authors
Hyeok Kim
Northwestern University, Evanston, Illinois, United States
Yea-Seul Kim
University of Wisconsin-Madison, Madison, Wisconsin, United States
Jessica Hullman
Northwestern University, Evanston, Illinois, United States
Paper URL

doi.org/10.1145/3613904.3642442

“It is hard to remove from my eye”: Design Makeup Residue Visualization System for Chinese Traditional Opera (Xiqu) Performers
Abstract

Chinese traditional opera (Xiqu) performers often experience skin problems due to the long-term use of heavy-metal-laden face paints. To explore the current skincare challenges encountered by Xiqu performers, we conducted an online survey (N=136) and semi-structured interviews (N=15) as a formative study. We found that incomplete makeup removal is the leading cause of human-induced skin problems, especially the difficulty of removing eye makeup. Therefore, we proposed EyeVis, a prototype that can visualize residual eye makeup and record the time makeup was worn by Xiqu performers. We conducted a 7-day deployment study (N=12) to evaluate EyeVis. Results indicate that EyeVis helps to increase Xiqu performers' awareness about removing makeup, as well as to boost their confidence and security in skincare. Overall, this work also provides implications for studying the work of people who wear makeup on a daily basis, and helps to promote and preserve the intangible cultural heritage of practitioners.

Award
Honorable Mention
Authors
Zeyu Xiong
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Shihan Fu
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Yanying Zhu
Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Chenqing Zhu
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Xiaojuan Ma
Hong Kong University of Science and Technology, Hong Kong, Hong Kong
Mingming Fan
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Paper URL

doi.org/10.1145/3613904.3642261
