Information and Visualization Interfaces

Conference Name
UIST 2022
Scholastic: Graphical Human-AI Collaboration for Inductive and Interpretive Text Analysis
Abstract

Interpretive scholars generate knowledge from text corpora by manually sampling documents, applying codes, and refining and collating codes into categories until meaningful themes emerge. Given a large corpus, machine learning could help scale this data sampling and analysis, but prior research shows that experts are generally concerned about algorithms potentially disrupting or driving interpretive scholarship. We take a human-centered design approach to addressing concerns around machine-assisted interpretive research to build Scholastic, which incorporates a machine-in-the-loop clustering algorithm to scaffold interpretive text analysis. As a scholar applies codes to documents and refines them, the resulting coding schema serves as structured metadata which constrains hierarchical document and word clusters inferred from the corpus. Interactive visualizations of these clusters can help scholars strategically sample documents further toward insights. Scholastic demonstrates how human-centered algorithm design and visualizations employing familiar metaphors can support inductive and interpretive research methodologies through interactive topic modeling and document clustering.
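The core idea of the coding schema constraining machine-suggested groupings can be made concrete with a minimal sketch (this is not Scholastic's clustering algorithm; the toy corpus, codes, TF-IDF features, and cosine similarity are all illustrative assumptions): documents the scholar has already coded act as seeds, and uncoded documents are suggested for the code whose seed centroid they most resemble.

```python
# Minimal sketch, not Scholastic's actual algorithm: human-applied codes act
# as seeds that constrain how the remaining (uncoded) documents are grouped.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [                      # toy corpus (hypothetical)
    "coping with the loss of a loved one",
    "how online memorial pages are moderated",
    "grief support groups and their norms",
    "designing a digital memorial for a friend",
    "a personal account of mourning online",
]
coded = {0: "grief", 1: "memorial"}   # doc index -> code applied by the scholar

X = TfidfVectorizer(stop_words="english").fit_transform(documents)

# One centroid per code, built from the documents the scholar has already coded.
codes = sorted(set(coded.values()))
centroids = np.vstack([
    np.asarray(X[[i for i, c in coded.items() if c == code]].mean(axis=0))
    for code in codes
])

# Suggest a code-constrained grouping for each uncoded document; the scholar
# can then sample from these suggestions to refine the schema further.
sims = cosine_similarity(X, centroids)
for i in range(len(documents)):
    if i not in coded:
        print(i, "->", codes[sims[i].argmax()])
```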

Authors
Matt-Heun Hong
University of Colorado Boulder, Boulder, Colorado, United States
Lauren A. Marsh
University of Colorado Boulder, Boulder, Colorado, United States
Jessica L. Feuston
University of Colorado Boulder, Boulder, Colorado, United States
Janet Ruppert
CU Boulder, Boulder, Colorado, United States
Jed R. Brubaker
University of Colorado Boulder, Boulder, Colorado, United States
Danielle Albers Szafir
University of North Carolina-Chapel Hill, Chapel Hill, North Carolina, United States
Paper URL

https://doi.org/10.1145/3526113.3545681

Wikxhibit: Using HTML and Wikidata to Author Applications that Link Data Across the Web
Abstract

Wikidata is a companion to Wikipedia that captures a substantial part of the information about most Wikipedia entities in a machine-readable structured form. In addition to directly representing information from Wikipedia itself, Wikidata also cross-references how additional information about these entities can be accessed through APIs on hundreds of other websites. This trove of valuable information has become a source of numerous domain-specific information presentations on the web, such as art galleries or directories of actors. Developers have created a number of such tools that present Wikidata data, sometimes combined with data accessed through Wikidata's cross-referenced web APIs. However, the creation of these presentations requires significant programming effort and is often impossible for non-programmers. In this work, we empower users, even non-programmers, to create presentations of Wikidata and other sources of data on the web, using only HTML with no additional programming. We present Wikxhibit, a JavaScript library for creating HTML-based data presentations of Wikidata and the other data APIs it cross-references. Wikxhibit allows a user to author plain HTML that, with the addition of a few new attributes, is able to dynamically fetch and display any Wikidata data or its cross-referenced Web APIs. Wikxhibit's JavaScript library uses Wikidata as the bridge to connect all the cross-referenced web APIs, allowing users to aggregate data from multiple Web APIs at once, seamlessly connecting object to object, without even realizing that they are pulling data from multiple websites. We integrate Wikxhibit with Mavo, an HTML language extension for describing web applications declaratively, to empower plain-HTML authors to create presentations of Wikidata. Our evaluation shows that users, even non-programmers, can create presentations of Wikidata and other sources of web data using Wikxhibit in just 5 minutes.
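The notion of Wikidata as a bridge to other websites' data can be illustrated outside of HTML. The sketch below is Python rather than Wikxhibit's attribute-based HTML and only shows the underlying data flow against the public Special:EntityData endpoint; the example entity (Q42, Douglas Adams) and external-identifier property are arbitrary choices, not part of Wikxhibit's API.

```python
# Minimal sketch of the data flow Wikxhibit hides behind HTML attributes:
# fetch a Wikidata entity, then follow one of its cross-referenced external
# identifiers toward another website's data. (Illustrative only.)
import requests

ENTITY = "Q42"  # Douglas Adams, used purely as an example
data = requests.get(
    f"https://www.wikidata.org/wiki/Special:EntityData/{ENTITY}.json"
).json()["entities"][ENTITY]

label = data["labels"]["en"]["value"]

# P345 is Wikidata's "IMDb ID" property; entities carry many such external
# identifiers that point at other websites' pages and APIs.
imdb_claims = data["claims"].get("P345", [])
imdb_id = imdb_claims[0]["mainsnak"]["datavalue"]["value"] if imdb_claims else None

print(label, "->", imdb_id)
```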

Authors
Tarfah Alrashed
MIT, Cambridge, Massachusetts, United States
Lea Verou
MIT, Cambridge, Massachusetts, United States
David R. Karger
MIT, Cambridge, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3526113.3545706

Wigglite: Low-cost Information Collection and Triage
Abstract

Consumers conducting comparison shopping, researchers making sense of competitive space, and developers looking for code snippets online all face the challenge of capturing the information they find for later use without interrupting their current flow. In addition, during many learning and exploration tasks, people need to externalize their mental context, such as estimating how urgent a topic is to follow up on, or rating a piece of evidence as a "pro" or "con," which helps scaffold subsequent deeper exploration. However, current approaches incur a high cost, often requiring users to select, copy, context switch, paste, and annotate information in a separate document without offering specific affordances that capture their mental context. In this work, we explore a new interaction technique called "wiggling," which can be used to fluidly collect, organize, and rate information during early sensemaking stages with a single gesture. Wiggling involves rapid back-and-forth movements of a pointer or up-and-down scrolling on a smartphone, which can indicate the information to be collected and its valence, using a single, light-weight gesture that does not interfere with other interactions that are already available. Through implementation and user evaluation, we found that wiggling helped participants accurately collect information and encode their mental context with a 58% reduction in operational cost while being 24% faster compared to a common baseline.
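To make the gesture concrete, the sketch below shows one plausible way to detect a "wiggle" from a stream of pointer samples by counting rapid horizontal direction reversals within a short time window. It is only an illustration: the thresholds and detection logic are assumptions, not the paper's recognizer.

```python
# Minimal sketch (not Wigglite's actual recognizer): treat a burst of rapid
# horizontal direction reversals within a short time window as a "wiggle".
from dataclasses import dataclass

@dataclass
class PointerSample:
    t: float   # timestamp in seconds
    x: float   # horizontal pointer position in pixels

def is_wiggle(samples, min_reversals=3, window=0.6, min_travel=8.0):
    """True when the pointer reverses horizontal direction at least
    min_reversals times within the last `window` seconds, each leg moving
    at least min_travel pixels (all thresholds are illustrative)."""
    if len(samples) < 2:
        return False
    recent = [s for s in samples if samples[-1].t - s.t <= window]
    reversals, direction, travel = 0, 0, 0.0
    for prev, cur in zip(recent, recent[1:]):
        dx = cur.x - prev.x
        new_dir = (dx > 0) - (dx < 0)
        travel += abs(dx)
        if new_dir and direction and new_dir != direction and travel >= min_travel:
            reversals += 1
            travel = 0.0
        if new_dir:
            direction = new_dir
    return reversals >= min_reversals

# Example: a quick left-right-left-right motion registers as a wiggle.
trace = [PointerSample(t=i * 0.05, x=100 + (10 if i % 2 else -10)) for i in range(10)]
print(is_wiggle(trace))   # True
```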

Authors
Michael Xieyang Liu
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Andrew Kuznetsov
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Yongsung Kim
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Joseph Chee Chang
Semantic Scholar, Seattle, Washington, United States
Aniket Kittur
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Brad A. Myers
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3526113.3545661

MuscleRehab: Improving Unsupervised Physical Rehabilitation by Monitoring and Visualizing Muscle Engagement
Abstract

Unsupervised physical rehabilitation traditionally has used motion tracking to determine correct exercise execution. However, motion tracking is not representative of the assessment of physical therapists, which focuses on muscle engagement. In this paper, we investigate whether monitoring and visualizing muscle engagement during unsupervised physical rehabilitation improves the execution accuracy of therapeutic exercises by showing users whether they target the right muscle groups. To accomplish this, we use wearable electrical impedance tomography (EIT) to monitor the muscle engagement and visualize the current state on a virtual muscle-skeleton avatar. We use additional optical motion tracking to also monitor the user's movement. We run a user study with 10 participants that compares exercise execution while seeing muscle + motion data vs. motion data only, and also present the recorded data to a group of physical therapists for post-rehabilitation analysis. The results indicate that monitoring and visualizing muscle engagement can improve both therapeutic exercise accuracy for users during rehabilitation and post-rehabilitation evaluation for physical therapists.

Authors
Junyi Zhu
MIT CSAIL, Cambridge, Massachusetts, United States
Yuxuan Lei
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Aashini Shah
MIT CSAIL, Cambridge, Massachusetts, United States
Gila Schein
MIT CSAIL, Cambridge, Massachusetts, United States
Hamid Ghaednia
Massachusetts General Hospital, Boston, Massachusetts, United States
Joseph H. Schwab
Massachusetts General Hospital, Boston, Massachusetts, United States
Casper Harteveld
Northeastern University, Boston, Massachusetts, United States
Stefanie Mueller
MIT CSAIL, Cambridge, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3526113.3545705

Fuse: In-Situ Sensemaking Support in the Browser
Abstract

People spend a significant amount of time trying to make sense of the internet, collecting content from a variety of sources and organizing it to make decisions and achieve their goals. While humans are able to fluidly iterate on collecting and organizing information in their minds, existing tools and approaches introduce significant friction into the process. We introduce Fuse, a browser extension that externalizes users’ working memory by combining low-cost collection with lightweight organization of content in a compact card-based sidebar that is always available. Fuse helps users simultaneously extract key web content and structure it in a lightweight and visual way. We discuss how these affordances help users externalize more of their mental model into the system (e.g., saving, annotating, and structuring items) and support fast reviewing and resumption of task contexts. Our 22-month public deployment and follow-up interviews provide longitudinal insights into the structuring behaviors of real-world users conducting information foraging tasks.

Authors
Andrew Kuznetsov
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Joseph Chee Chang
Semantic Scholar, Seattle, Washington, United States
Nathan Hahn
US Army, Adelphi, Maryland, United States
Napol Rachatasumrit
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Bradley Breneisen
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Julina Coupland
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Aniket Kittur
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3526113.3545693

Diffscriber: Describing Visual Design Changes to Support Mixed-Ability Collaborative Presentation Authoring
Abstract

Visual slide-based presentations are ubiquitous, yet slide authoring tools are largely inaccessible to people who are blind or visually impaired (BVI). When authoring presentations, the 9 BVI presenters in our formative study usually work with sighted collaborators to produce visual slides based on the text content they provide. While BVI presenters valued their collaborators’ visual design skills, they often felt unable to fully review and give feedback on the visual changes their collaborators made. We present Diffscriber, a system that identifies and describes changes to a slide’s content, layout, and style for presentation authoring. Using our system, BVI presentation authors can efficiently review changes to their presentation by navigating either a summary of high-level changes or individual slide elements. To learn more about changes of interest, presenters can use a generated change hierarchy to navigate to lower-level change details and element styles. BVI presenters using Diffscriber were able to identify slide design changes and provide feedback more easily than when using the slides alone. More broadly, Diffscriber illustrates how advances in detecting and describing visual differences can improve mixed-ability collaboration.
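As a rough illustration of detecting slide changes and grouping them by kind (content, layout, style), the sketch below diffs two versions of a slide represented as dictionaries of elements. The slide representation, the property-to-kind mapping, and the output format are illustrative assumptions, not Diffscriber's pipeline.

```python
# Minimal sketch (not Diffscriber's pipeline): compare two versions of a slide,
# represented as dicts of elements, and report changes grouped by kind so they
# could later be summarized at a high level or read element by element.
OLD = {
    "title":  {"text": "Results", "x": 40, "y": 30, "font_size": 40, "color": "black"},
    "chart1": {"text": "", "x": 60, "y": 120, "font_size": None, "color": None},
}
NEW = {
    "title":  {"text": "Results", "x": 40, "y": 20, "font_size": 44, "color": "navy"},
    "chart1": {"text": "", "x": 60, "y": 120, "font_size": None, "color": None},
    "note":   {"text": "n = 24", "x": 500, "y": 400, "font_size": 18, "color": "gray"},
}

KIND = {"text": "content", "x": "layout", "y": "layout",
        "font_size": "style", "color": "style"}

def describe_changes(old, new):
    changes = []  # (element, kind, detail)
    for name in new.keys() - old.keys():
        changes.append((name, "content", "element added"))
    for name in old.keys() - new.keys():
        changes.append((name, "content", "element removed"))
    for name in old.keys() & new.keys():
        for prop, kind in KIND.items():
            if old[name][prop] != new[name][prop]:
                changes.append((name, kind,
                                f"{prop}: {old[name][prop]} -> {new[name][prop]}"))
    return changes

for element, kind, detail in describe_changes(OLD, NEW):
    print(f"[{kind}] {element}: {detail}")
```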

Authors
Yi-Hao Peng
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Jason Wu
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Jeffrey P. Bigham
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Amy Pavel
University of Texas at Austin, Austin, Texas, United States
Paper URL

https://doi.org/10.1145/3526113.3545637