Data Visualization: Geospatial and Multimodal

Conference Name
CHI 2024
DeepSee: Multidimensional Visualizations of Seabed Ecosystems
Abstract

Scientists studying deep ocean microbial ecosystems use limited numbers of sediment samples collected from the seafloor to characterize important life-sustaining biogeochemical cycles in the environment. Yet conducting fieldwork to sample these extreme, remote environments is both expensive and time-consuming, requiring tools that enable scientists to explore the sampling history of field sites and predict where taking new samples is likely to maximize scientific return. We conducted a collaborative, user-centered design study with a team of scientific researchers to develop DeepSee, an interactive data workspace that visualizes 2D and 3D interpolations of biogeochemical and microbial processes in context together with sediment sampling history overlaid on 2D seafloor maps. Based on a field deployment and qualitative interviews, we found that DeepSee increased the scientific return from limited sample sizes, catalyzed new research workflows, reduced long-term costs of sharing data, and supported teamwork and communication between team members with diverse research goals.
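The abstract describes interpolating sparse sediment-sample measurements across a seafloor map. As a minimal illustration of that general idea (not the paper's actual method), the sketch below uses inverse-distance weighting to spread a handful of hypothetical sample values onto grid points:

```python
import numpy as np

def idw_interpolate(sample_xy, sample_vals, grid_xy, power=2.0):
    """Inverse-distance-weighted interpolation of sparse samples onto grid points."""
    # Pairwise distances: (n_grid, n_samples)
    d = np.linalg.norm(grid_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)  # avoid division by zero at sample locations
    w = 1.0 / d ** power
    return (w @ sample_vals) / w.sum(axis=1)

# Four hypothetical sediment samples with one measured value each.
samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([1.0, 2.0, 3.0, 4.0])
grid = np.array([[0.5, 0.5]])  # a single grid cell at the center
print(idw_interpolate(samples, values, grid))  # center is equidistant → mean 2.5
```

Real systems would use more principled geostatistical interpolation (e.g. kriging), but the weighting idea is the same: nearby samples dominate the estimate.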

Authors
Adam J. Coscia
Georgia Institute of Technology, Atlanta, Georgia, United States
Haley M. Sapers
California Institute of Technology, Pasadena, California, United States
Noah X. Deutsch
Harvard University, Cambridge, Massachusetts, United States
Malika Khurana
The New York Times, New York, New York, United States
John S. Magyar
California Institute of Technology, Pasadena, California, United States
Sergio A. Parra
California Institute of Technology, Pasadena, California, United States
Daniel Utter
California Institute of Technology, Pasadena, California, United States
Rebecca L. Wipfler
California Institute of Technology, Pasadena, California, United States
David W. Caress
Monterey Bay Aquarium Research Institute, Moss Landing, California, United States
Eric J. Martin
Monterey Bay Aquarium Research Institute, Moss Landing, California, United States
Jennifer B. Paduan
Monterey Bay Aquarium Research Institute, Moss Landing, California, United States
Maggie Hendrie
Art Center College of Design, Pasadena, California, United States
Santiago V. Lombeyda
California Institute of Technology, Pasadena, California, United States
Hillary Mushkin
California Institute of Technology, Pasadena, California, United States
Alex Endert
Georgia Institute of Technology, Atlanta, Georgia, United States
Scott Davidoff
California Institute of Technology, Pasadena, California, United States
Victoria J. Orphan
California Institute of Technology, Pasadena, California, United States
Paper URL

https://doi.org/10.1145/3613904.3642001

Video
SalienTime: User-driven Selection of Salient Time Steps for Large-Scale Geospatial Data Visualization
Abstract

The voluminous nature of geospatial temporal data from physical monitors and simulation models poses challenges to efficient data access, often resulting in cumbersome temporal selection experiences in web-based data portals. Thus, selecting a subset of time steps for prioritized visualization and pre-loading is highly desirable. Addressing this issue, this paper establishes a multifaceted definition of salient time steps via extensive need-finding studies with domain experts to understand their workflows. Building on this, we propose a novel approach that leverages autoencoders and dynamic programming to facilitate user-driven temporal selections. Structural features, statistical variations, and distance penalties are incorporated to make more flexible selections. User-specified priorities, spatial regions, and aggregations are used to combine different perspectives. We design and implement a web-based interface to enable efficient and context-aware selection of time steps and evaluate its efficacy and usability through case studies, quantitative evaluations, and expert interviews.
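The abstract pairs autoencoder features with dynamic programming to select salient time steps. A minimal sketch of the dynamic-programming half, assuming per-step latent vectors are already available and substituting a simple within-segment squared-error cost for the paper's richer cost terms:

```python
import numpy as np

def select_salient(features, k):
    """Pick k representative time steps via optimal 1-D segmentation (DP).
    features: (T, d) array, e.g. autoencoder latent vectors per time step."""
    T = len(features)

    def cost(i, j):
        # Error of summarizing steps i..j (inclusive) by their mean feature.
        seg = features[i:j + 1]
        return float(((seg - seg.mean(axis=0)) ** 2).sum())

    INF = float("inf")
    dp = [[INF] * T for _ in range(k + 1)]   # dp[s][j]: best cost for s segments over 0..j
    cut = [[-1] * T for _ in range(k + 1)]   # cut[s][j]: start of the last segment
    for j in range(T):
        dp[1][j] = cost(0, j)
    for s in range(2, k + 1):
        for j in range(s - 1, T):
            for i in range(s - 1, j + 1):
                c = dp[s - 1][i - 1] + cost(i, j)
                if c < dp[s][j]:
                    dp[s][j], cut[s][j] = c, i
    # Recover segment boundaries, then take each segment's middle step.
    bounds, j = [], T - 1
    for s in range(k, 1, -1):
        i = cut[s][j]
        bounds.append((i, j))
        j = i - 1
    bounds.append((0, j))
    return sorted((i + j) // 2 for i, j in bounds)

feats = np.array([[0.0], [0.0], [0.0], [5.0], [5.0], [5.0]])
print(select_salient(feats, 2))  # → [1, 4]
```

The O(kT²) DP finds globally optimal segment boundaries for this cost; the paper's structural features, distance penalties, and user priorities would enter through a richer cost function.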

Authors
Juntong Chen
East China Normal University, Shanghai, Shanghai, China
Haiwen Huang
East China Normal University, Shanghai, Shanghai, China
Huayuan Ye
East China Normal University, Shanghai, China
Zhong Peng
East China Normal University, Shanghai, Shanghai, China
Chenhui Li
East China Normal University, Shanghai, China
Changbo Wang
Department of Software Science and Technology, Shanghai, Shanghai, China
Paper URL

https://doi.org/10.1145/3613904.3642944

Video
Data Cubes in Hand: A Design Space of Tangible Cubes for Visualizing 3D Spatio-Temporal Data in Mixed Reality
Abstract

Tangible interfaces in mixed reality (MR) environments allow for intuitive data interactions. Tangible cubes, with their rich interaction affordances, high maneuverability, and stable structure, are particularly well-suited for exploring multi-dimensional data types. However, the design potential of these cubes is underexplored. This study introduces a design space for tangible cubes in MR, focusing on interaction space, visualization space, sizes, and multiplicity. Using spatio-temporal data, we explored the interaction affordances of these cubes in a workshop (N=24). We identified unique interactions like rotating, tapping, and stacking, which are linked to augmented reality (AR) visualization commands. Integrating user-identified interactions, we created a design space for tangible-cube interactions and visualization. A prototype visualizing global health spending with small cubes was developed and evaluated, supporting both individual and combined cube manipulation. This research enhances our grasp of tangible interaction in MR, offering insights for future design and application in diverse data contexts.
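The workshop linked cube gestures such as rotating, tapping, and stacking to AR visualization commands. A minimal sketch of such a gesture-to-command dispatch (all command names here are hypothetical, not taken from the paper):

```python
# Hypothetical mapping from detected cube gestures to visualization commands,
# following the rotate/tap/stack interactions the abstract describes.
COMMANDS = {
    "rotate": "switch_time_slice",    # turn the cube to step through time
    "tap": "toggle_detail_view",      # tap a face to expand its encoding
    "stack": "combine_datasets",      # stack two cubes to merge their views
}

def dispatch(gesture: str) -> str:
    """Resolve a detected cube gesture to a visualization command."""
    return COMMANDS.get(gesture, "ignore")

print(dispatch("stack"))  # → combine_datasets
print(dispatch("shake"))  # unmapped gesture → ignore
```

A table-driven mapping like this keeps the gesture recognizer decoupled from the visualization layer, so new gestures found in workshops can be added without touching rendering code.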

Authors
Shuqi He
Xi'an Jiaotong-Liverpool University, Suzhou, China
Haonan Yao
Xi'an Jiaotong-Liverpool University, Suzhou, China
Luyan Jiang
Xi'an Jiaotong-Liverpool University, Suzhou, China
Kaiwen Li
Xi'an Jiaotong-Liverpool University, Suzhou, China
Nan Xiang
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Yue Li
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Hai-Ning Liang
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Lingyun Yu
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Paper URL

https://doi.org/10.1145/3613904.3642740

Video
Understanding Reader Takeaways in Thematic Maps Under Varying Text, Detail, and Spatial Autocorrelation
Abstract

Maps are crucial in conveying geospatial data in diverse contexts such as news and scientific reports. This research, utilizing thematic maps, probes deeper into the underexplored intersection of text framing and map types in influencing map interpretation. In this work, we conducted experiments to evaluate how textual detail and semantic content variations affect the quality of insights derived from map examination. We also explored the influence of explanatory annotations across different map types (e.g., choropleth, hexbin, isarithmic), base map details, and changing levels of spatial autocorrelation in the data. From two online experiments with N=103 participants, we found that annotations, their specific attributes, and the map type used to present the data significantly shape the quality of takeaways. Notably, we found that the effectiveness of annotations hinges on their contextual integration. These findings offer valuable guidance to the visualization community for crafting impactful thematic geospatial representations.
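The experiments vary the level of spatial autocorrelation in the mapped data. A standard way to quantify that property (shown here for illustration; the paper may use a different formulation) is global Moran's I:

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I: > 0 means similar values cluster spatially,
    < 0 means neighboring values tend to differ."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = x - x.mean()  # deviations from the mean
    return (len(x) / w.sum()) * (z @ w @ z) / (z @ z)

# Four map cells in a line; each cell's neighbors are the adjacent cells.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(round(morans_i([1, 1, 9, 9], W), 4))  # low values cluster left, high right → 0.3333
```

A checkerboard pattern like [1, 9, 1, 9] on the same adjacency yields a negative value, matching the intuition that alternating neighbors indicate negative autocorrelation.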

Authors
Arlen Fan
Arizona State University, Tempe, Arizona, United States
Fan Lei
Arizona State University, Tempe, Arizona, United States
Michelle Mancenido
Arizona State University, Tempe, Arizona, United States
Alan MacEachren
Pennsylvania State University, University Park, Pennsylvania, United States
Ross Maciejewski
Arizona State University, Tempe, Arizona, United States
Paper URL

https://doi.org/10.1145/3613904.3642132

Video
MAIDR: Making Statistical Visualizations Accessible with Multimodal Data Representation
Abstract

This paper investigates new data exploration experiences that enable blind users to interact with statistical data visualizations (bar plots, heat maps, box plots, and scatter plots) leveraging multimodal data representations. In addition to sonification and textual descriptions that are commonly employed by existing accessible visualizations, our MAIDR (multimodal access and interactive data representation) system incorporates two additional modalities (braille and review) that offer complementary benefits. It also provides blind users with the autonomy and control to interactively access and understand data visualizations. In a user study involving 11 blind participants, we found the MAIDR system facilitated the accurate interpretation of statistical visualizations. Participants exhibited a range of strategies in combining multiple modalities, influenced by their past interactions and experiences with data visualizations. This work accentuates the overlooked potential of combining refreshable tactile representation with other modalities and elevates the discussion on the importance of user autonomy when designing accessible data visualizations.
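Sonification, one of the modalities the abstract mentions, typically maps data values onto pitch. A minimal sketch of such a value-to-frequency mapping (the frequency range is an arbitrary choice for illustration, not MAIDR's actual implementation):

```python
def sonify(values, f_min=220.0, f_max=880.0):
    """Map data values linearly onto a pitch range in Hz (two octaves, A3 to A5)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # guard against a constant series
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

print(sonify([0, 5, 10]))  # → [220.0, 550.0, 880.0]
```

The resulting frequencies would then drive an audio synthesizer, one tone per bar or data point, so rising data is heard as rising pitch.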

Authors
JooYoung Seo
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Yilin Xia
University of Illinois at Urbana-Champaign, Urbana, Illinois, United States
Bongshin Lee
Microsoft Research, Redmond, Washington, United States
Sean McCurry
TransPerfect, Denver, Colorado, United States
Yu Jun Yam
University of Illinois Urbana-Champaign, Urbana, Illinois, United States
Paper URL

https://doi.org/10.1145/3613904.3642730

Video