Communication

Conference Name
CHI 2026
TableTale: Reviving the Narrative Interplay Between Tables and Text in Scientific Papers
Abstract

Data tables play a central role in scientific papers. However, their meaning is often co-constructed with the surrounding text through narrative interplay, making comprehension cognitively demanding for readers. In this work, we explore how interfaces can better support this reading process. We conducted a formative study that revealed key characteristics of text-table narrative interplay, including linking mechanisms, multi-granularity alignments, and mention typologies, as well as a layered framework of readers' intents. Informed by these insights, we present TableTale, an augmented reading interface that enriches text with data tables at multiple granularities, including paragraphs, sentences, and mentions. TableTale automatically constructs a document-level linking schema within the paper and progressively renders cascading visual cues on text and tables that unfold as readers move through the text. A within-subject study with 24 participants showed that TableTale reduced cognitive workload and improved reading efficiency, demonstrating its potential to enhance paper reading and inform future reading interface design.

Authors
Liangwei Wang
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Zhengxuan Zhang
Information Hub, Guangzhou, China
Yi-Fan Cao
Hong Kong University of Science and Technology, Hong Kong, China
Fugee Tsung
The Hong Kong University of Science and Technology, Hong Kong, China
Yuyu Luo
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
ChatLearn: Leveraging Non-Native Speaker Communication Challenges as Language Learning Opportunities
Abstract

Non-native speakers (NNSs) face significant language barriers in multilingual communication with native speakers (NSs). While AI-mediated communication (AIMC) tools offer efficient one-time assistance, they often overlook opportunities for NNSs' continuous language acquisition. We introduce ChatLearn, an enhanced AIMC system that leverages NNSs' communication difficulties as learning opportunities. Beyond comprehension and expression assistance, ChatLearn simultaneously captures NNSs' language challenges and subsequently provides them with spaced review as the conversation progresses. We conducted a mixed-methods study using a communication task with 43 NNS-NS pairs, in which NNSs using ChatLearn recalled significantly more expressions than those in the baseline group, with no substantial decline in communication experience. Our findings highlight the value of contextual learning in NNS-NS communication, providing a new direction for AIMC systems that foster both immediate collaboration and continuous language development.

Authors
Peinuan Qin
National University of Singapore, Singapore, Singapore
Yugin Tan
National University of Singapore, Singapore, Singapore
Jingzhu Chen
Tongji University, Shanghai, China
Nattapat Boonprakong
National University of Singapore, Singapore, Singapore
Zicheng Zhu
National University of Singapore, Singapore, Singapore
Naomi Yamashita
Kyoto University, Kyoto, Japan
Yi-Chieh Lee
National University of Singapore, Singapore, Singapore
I Feel We Are Together: How People Perceive Personalized Face-Swapped GIFs in Text-Based Communication
Abstract

Nonverbal cues in text-based computer-mediated communication (CMC), initially introduced to compensate for the lack of social and emotional cues, have evolved beyond their original purpose to express user identity. In particular, embodied identity cues—such as a user’s real face—remain relatively underexplored in text-based CMC despite their potential as richer cues. Recent advances in generative AI have lowered the barrier to AI-mediated self-presentation, yet empirical research is still needed to understand how these cues operate in real interactions and how users experience and accept them. To address this gap, we investigate the social and emotional effects of face-swapped GIFs (FSGIFs) created via generative AI. In a two-phase within-subjects experiment with 32 participants (16 dyads of close acquaintances), we find that FSGIFs significantly enhance relational benefits, including greater co-presence and intimacy compared to generic GIFs. Based on these findings and insights from interviews, we discuss design implications for AI-mediated self-presentation in text-based CMC.

Authors
Daeun Jeong
Pohang University of Science and Technology, Pohang, Korea, Republic of
Hyunwook Lee
Soongsil University, Seoul, Korea, Republic of
Minjeong Shin
Pohang University of Science and Technology, Pohang-si, Gyeongsangbuk-do, Korea, Republic of
Joohee Kim
Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of
Changhee Lee
Pohang University of Science and Technology, Pohang, Korea, Republic of
Sungbeom Cho
POSTECH, Pohang, Korea, Republic of
Hyotaek Jeon
POSTECH, Pohang, Korea, Republic of
Seungjae Oh
Kyung Hee University, Yongin, Korea, Republic of
Sungahn Ko
POSTECH, Pohang, Korea, Republic of
Roger That: Firefighters’ Perspectives on Integrating Drones in Radio Communication
Abstract

Drones are increasingly used in firefighting to support situational awareness. Yet, current designs and existing practices hinder information flow between humans and machines. Through interviews (N=12), we explore firefighters' perspectives on integrating drones into their main remote communication channel, namely, the radio. We examined current radio usage and the state of drone deployment, and gathered feedback on a futuristic drone scenario. Our findings span from existing organizational and communication strategies to how drones are, and could further be, integrated into existing practices. We uncovered structured practices that could facilitate the integration of robotic agents. Firefighters further suggested specific requirements for drone communication over the radio, such as concise and timely messaging. We propose design recommendations for drones as radio-communicating agents, bridging established low-tech practices with emerging autonomy. This work demonstrates the feasibility of drones as radio-integrated teammates and establishes principles for designing them as reliable, situationally aware agents in safety-critical contexts.

Authors
Tom Lautenbach
TU Wien, Vienna, Austria
Jessica R. Cauchard
TU Wien, Vienna, Austria
Finding a Home for Voice Assistants: A Domestication Calculus Across Three Years and Thirty Households
Abstract

HCI has explored voice assistant (VA) use across various social settings, highlighting their impact on personal and familial dynamics. Yet, the progressive domestication of these devices over time and their longer-term impact on relationships remain underexplored. We present findings from a three-year study of 30 households using interviews and diaries. Our analysis introduces the concept of a domestication calculus that captures how VAs find—or fail to find—a home over time through shifting spatial arrangements, relational roles, and household routines. Domestication unfolded not as a linear sequence of stages but as a dynamic process in which devices were either embedded into routines, withdrawn from use, or repurposed in response to changing circumstances. Across these trajectories, participants attributed four recurring roles to their VAs: (1) negotiators, (2) separators, (3) mediators, and (4) amplifiers of shared life. We conclude with implications for designing VAs that support long-term domestication.

Authors
Mahla Alizadeh
University of Siegen, Siegen, Germany
Minha Lee
Eindhoven University of Technology, Eindhoven, Netherlands
Dave Randall
University of Siegen, Siegen, Germany
Peter Tolmie
University of Siegen, Siegen, Germany
Dominik Pins
Fraunhofer-Institute for Applied Information Technology FIT, Sankt Augustin, Germany
Gunnar Stevens
University of Siegen, Siegen, North Rhine-Westphalia, Germany
CoMap: A Collaborative 3D Sketch Mapping Game to Engage Spatial Communication in Search and Rescue
Abstract

Search and rescue (SAR) is a complex teamwork environment that requires efficient spatial communication between commanders and field teams with heterogeneous perspectives and asymmetric information. Maps are central artifacts in SAR, yet they are also a space of technological tension due to the constantly changing situations at disaster sites. Sketch mapping is an effective method of externalizing and communicating spatial understanding, increasing situation awareness in spatial decision-making tasks, including SAR. Current paper-based sketch mapping in SAR struggles to handle the three-dimensional nature of physical space and remote collaboration. We propose CoMap, a collaborative 3D sketch mapping system validated in a virtual reality fire-rescue game. In a within-subject study with 13 commander–field team pairs, CoMap enabled more accurate and efficient spatial communication than conventional 2D sketch mapping. Communication analysis further showed that CoMap fostered proactive descriptions. We distill three design implications for next-generation mapping tools to advance SAR training and real-world operations.

Award
Honorable Mention
Authors
Tianyi Xiao
Institute of Cartography and Geoinformation, ETH Zurich, Zürich, Switzerland
Sailin Zhong
ETH Zürich, Zürich, Switzerland
Peter Kiefer
ETH Zurich, Zurich, ZH, Switzerland
Miki Mizuki
University of Zurich, Zürich, Switzerland
Phoebe O. Toups Dugas
Monash University, Clayton, Victoria, Australia
Martin Raubal
ETH Zurich, Zurich, ZH, Switzerland
CHOIR: A Chatbot-mediated Organizational Memory Leveraging Communication in University Research Labs
Abstract

University research labs often rely on chat-based platforms for communication and project management, where valuable knowledge surfaces but is easily lost in message streams. Documentation can preserve knowledge, but it requires ongoing maintenance and is challenging to navigate. Drawing on formative interviews that revealed organizational memory challenges in labs, we designed CHOIR, an LLM-based chatbot that supports organizational memory through four key functions: document-grounded Q&A, Q&A sharing for follow-up discussion, knowledge extraction from conversations, and AI-assisted document updates. We deployed CHOIR in four research labs for one month (n=21), during which lab members asked 107 questions and lab directors updated documents in the organizational memory 38 times. Our findings reveal a privacy-awareness tension: questions were asked privately, limiting directors' visibility into documentation gaps. Students often avoided contributing due to the challenge of generalizing personal experiences into universal documentation. We contribute design implications for privacy-preserving awareness and for supporting context-specific knowledge documentation.

Award
Honorable Mention
Authors
Sangwook Lee
Virginia Tech, Blacksburg, Virginia, United States
Adnan Abbas
Virginia Polytechnic Institute & State University (Virginia Tech), Blacksburg, Virginia, United States
Yan Chen
Virginia Tech, Blacksburg, Virginia, United States
Young-Ho Kim
NAVER AI Lab, Seongnam, Korea, Republic of
Sang Won Lee
Virginia Tech, Blacksburg, Virginia, United States