3. Learning to Learn

Conference
UIST 2024
Patterns of Hypertext-Augmented Sensemaking
Abstract

The early days of HCI were marked by bold visions of hypertext as a transformative medium for augmented sensemaking, exemplified in systems like Memex, Xanadu, and NoteCards. Today, however, hypertext is often disconnected from discussions of the future of sensemaking. In this paper, we investigate how the recent resurgence in hypertext "tools for thought" might point to new directions for hypertext-augmented sensemaking. Drawing on detailed analyses of guided tours with 23 scholars, we describe hypertext-augmented use patterns for dealing with the core problem of revisiting and reusing existing/past ideas during scholarly sensemaking. We then discuss how these use patterns validate and extend existing knowledge of hypertext design patterns for sensemaking, and point to new design opportunities for augmented sensemaking.

Authors
Siyi Zhu
University of Maryland College Park, College Park, Maryland, United States
Robert Haisfield
WebSim, San Francisco, California, United States
Brendan Langen
Heyday, San Francisco, California, United States
Joel Chan
University of Maryland, College Park, Maryland, United States
Paper URL

https://doi.org/10.1145/3654777.3676338

Augmented Physics: Creating Interactive and Embedded Physics Simulations from Static Textbook Diagrams
Abstract

We introduce Augmented Physics, a machine learning-integrated authoring tool designed for creating embedded interactive physics simulations from static textbook diagrams. Leveraging recent advancements in computer vision, such as Segment Anything and Multi-modal LLMs, our web-based system enables users to semi-automatically extract diagrams from physics textbooks and generate interactive simulations based on the extracted content. These interactive diagrams are seamlessly integrated into scanned textbook pages, facilitating interactive and personalized learning experiences across various physics concepts, such as optics, circuits, and kinematics. Drawing from an elicitation study with seven physics instructors, we explore four key augmentation strategies: 1) augmented experiments, 2) animated diagrams, 3) bi-directional binding, and 4) parameter visualization. We evaluate our system through technical evaluation, a usability study (N=12), and expert interviews (N=12). Study findings suggest that our system can facilitate more engaging and personalized learning experiences in physics education.
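
The abstract describes a semi-automatic pipeline: segment a diagram out of a scanned textbook page, then have a multi-modal LLM turn the crop into a simulation specification. The following minimal Python sketch illustrates one plausible version of that pipeline; the model choices (SAM ViT-B, GPT-4o), prompt, file paths, and click coordinates are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the Augmented Physics code): extract a diagram
# region with Segment Anything, then ask a multi-modal LLM to propose
# simulation parameters for that diagram.
import base64
import json

import cv2
import numpy as np
from openai import OpenAI
from segment_anything import SamPredictor, sam_model_registry

# 1) Segment the diagram the user clicked on in the scanned page.
page = cv2.cvtColor(cv2.imread("textbook_page.png"), cv2.COLOR_BGR2RGB)
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # assumed local checkpoint
predictor = SamPredictor(sam)
predictor.set_image(page)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[420, 310]]),  # assumed user click on the diagram
    point_labels=np.array([1]),
    multimask_output=True,
)
mask = masks[int(np.argmax(scores))]
ys, xs = np.where(mask)
crop = page[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

# 2) Ask a multi-modal LLM to read the crop and return a simulation spec.
ok, png = cv2.imencode(".png", cv2.cvtColor(crop, cv2.COLOR_RGB2BGR))
data_url = "data:image/png;base64," + base64.b64encode(png.tobytes()).decode()
client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": (
                "This is a physics textbook diagram (optics, circuit, or "
                "kinematics). Return JSON with fields: concept, objects, and "
                "adjustable_parameters (name, unit, default)."
            )},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }],
)
spec = json.loads(resp.choices[0].message.content)
print(spec["concept"], spec["adjustable_parameters"])
```

A downstream renderer would then bind each adjustable parameter to a slider overlaid on the scanned page, which is where the paper's augmentation strategies (animated diagrams, bi-directional binding, parameter visualization) come in.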

Award
Best Paper
Authors
Aditya Gunturu
University of Calgary, Calgary, Alberta, Canada
Yi Wen
City University of Hong Kong, Hong Kong
Nandi Zhang
University of Calgary, Calgary, Alberta, Canada
Jarin Thundathil
University of Calgary, Calgary, Alberta, Canada
Rubaiat Habib Kazi
Adobe Research, Seattle, Washington, United States
Ryo Suzuki
University of Calgary, Calgary, Alberta, Canada
Paper URL

https://doi.org/10.1145/3654777.3676392

Qlarify: Recursively Expandable Abstracts for Dynamic Information Retrieval over Scientific Papers
Abstract

Navigating the vast scientific literature often starts with browsing a paper’s abstract. However, when a reader seeks additional information, not present in the abstract, they face a costly cognitive chasm during their dive into the full text. To bridge this gap, we introduce recursively expandable abstracts, a novel interaction paradigm that dynamically expands abstracts by progressively incorporating additional information from the papers’ full text. This lightweight interaction allows scholars to specify their information needs by quickly brushing over the abstract or selecting AI-suggested expandable entities. Relevant information is synthesized using a retrieval-augmented generation approach, presented as a fluid, threaded expansion of the abstract, and made efficiently verifiable via attribution to relevant source-passages in the paper. Through a series of user studies, we demonstrate the utility of recursively expandable abstracts and identify future opportunities to support low-effort and just-in-time exploration of long-form information contexts through LLM-powered interactions.
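
The expansion mechanism described here is a retrieval-augmented generation loop: embed the brushed abstract span, retrieve the most similar full-text passages, and have an LLM synthesize an expansion attributed to those passages. The sketch below illustrates that loop under assumed models (all-MiniLM-L6-v2 for retrieval, gpt-4o-mini for synthesis) and an assumed prompt; it is not the Qlarify implementation.

```python
# Hypothetical sketch of recursively expandable abstracts via RAG.
from openai import OpenAI
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed retrieval model
client = OpenAI()

def expand(selection: str, passages: list[str], k: int = 3) -> str:
    """Retrieve the k full-text passages most similar to the selected span,
    then ask an LLM to write a short expansion that cites those passages."""
    query_emb = encoder.encode(selection, convert_to_tensor=True)
    passage_embs = encoder.encode(passages, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, passage_embs, top_k=k)[0]
    context = "\n".join(
        f"[{h['corpus_id']}] {passages[h['corpus_id']]}" for h in hits
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "Expand the selected abstract span in 2-3 sentences using only "
                "the numbered passages below, citing the passages you used as "
                f"[id].\n\nSelection: {selection}\n\nPassages:\n{context}"
            ),
        }],
    )
    return resp.choices[0].message.content

# Usage: passages = full_text.split("\n\n"); expand("retrieval-augmented generation approach", passages)
```

Because the generated text carries passage indices, the interface can make each expansion verifiable by linking it back to the cited source passages, and any expansion can itself be brushed and expanded recursively.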

Authors
Raymond Fok
University of Washington, Seattle, Washington, United States
Joseph Chee Chang
Allen Institute for AI, Seattle, Washington, United States
Tal August
Allen Institute for AI, Seattle, Washington, United States
Amy X. Zhang
University of Washington, Seattle, Washington, United States
Daniel S. Weld
Allen Institute for Artificial Intelligence, Seattle, Washington, United States
Paper URL

https://doi.org/10.1145/3654777.3676397

LessonPlanner: Assisting Novice Teachers to Prepare Pedagogy-Driven Lesson Plans with Large Language Models
Abstract

Preparing a lesson plan, e.g., a detailed road map with strategies and materials for instructing a 90-minute class, is beneficial yet challenging for novice teachers. Large language models (LLMs) can ease this process by generating adaptive content for lesson plans, which would otherwise require teachers to create from scratch or search existing resources. In this work, we first conduct a formative study with six novice teachers to understand their needs for support of preparing lesson plans with LLMs. Then, we develop LessonPlanner that assists users to interactively construct lesson plans with adaptive LLM-generated content based on Gagne's nine events. Our within-subjects study (N=12) shows that compared to the baseline ChatGPT interface, LessonPlanner can significantly improve the quality of outcome lesson plans and ease users' workload in the preparation process. Our expert interviews (N=6) further demonstrate LessonPlanner's usefulness in suggesting effective teaching strategies and meaningful educational resources. We discuss concerns on and design considerations for supporting teaching activities with LLMs.
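
The core idea, an LLM drafting lesson-plan content organized around Gagné's nine events of instruction, can be sketched in a few lines of Python. The event list below follows Gagné's standard formulation; the model name and prompts are assumptions for illustration, not LessonPlanner's actual prompts.

```python
# Hypothetical sketch (not the LessonPlanner system): draft one lesson-plan
# section per event in Gagne's nine events of instruction.
from openai import OpenAI

GAGNE_EVENTS = [
    "Gain attention",
    "Inform learners of the objectives",
    "Stimulate recall of prior learning",
    "Present the content",
    "Provide learning guidance",
    "Elicit performance (practice)",
    "Provide feedback",
    "Assess performance",
    "Enhance retention and transfer",
]

client = OpenAI()

def draft_lesson_plan(topic: str, duration_min: int = 90) -> dict[str, str]:
    """Return a draft activity, materials, and time budget for each event."""
    plan = {}
    for event in GAGNE_EVENTS:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model
            messages=[
                {"role": "system",
                 "content": "You help novice teachers write lesson plans."},
                {"role": "user",
                 "content": (
                     f"Topic: {topic}. Total class time: {duration_min} minutes. "
                     f"Draft the '{event}' step: suggest one activity, the "
                     "materials needed, and an approximate time budget."
                 )},
            ],
        )
        plan[event] = resp.choices[0].message.content
    return plan

# Usage: draft_lesson_plan("Newton's second law")["Gain attention"]
```

In the actual system the teacher edits and regenerates each section interactively rather than accepting one-shot output, which is what the within-subjects comparison against a plain ChatGPT interface evaluates.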

Authors
Haoxiang Fan
Sun Yat-sen University, Zhuhai, Guangdong Province, China
Guanzheng Chen
School of Artificial Intelligence, Zhuhai, Guangdong Province, China
Xingbo Wang
Cornell University, New York, New York, United States
Zhenhui Peng
Sun Yat-sen University, Zhuhai, Guangdong Province, China
Paper URL

https://doi.org/10.1145/3654777.3676390
