Writing and AI A

Conference Name
CHI 2024
MindfulDiary: Harnessing Large Language Model to Support Psychiatric Patients' Journaling
Abstract

Large Language Models (LLMs) offer promising opportunities in mental health domains, although their inherent complexity and low controllability elicit concern regarding their applicability in clinical settings. We present MindfulDiary, an LLM-driven journaling app that helps psychiatric patients document daily experiences through conversation. Designed in collaboration with mental health professionals, MindfulDiary takes a state-based approach to safely comply with the experts' guidelines while carrying on free-form conversations. Through a four-week field study involving 28 patients with major depressive disorder and five psychiatrists, we examined how MindfulDiary facilitates patients' journaling practice and clinical care. The study revealed that MindfulDiary supported patients in consistently enriching their daily records and helped clinicians better empathize with their patients through an understanding of their thoughts and daily contexts. Drawing on these findings, we discuss the implications of leveraging LLMs in the mental health domain, bridging the technical feasibility and their integration into clinical settings.
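The abstract does not detail the state-based approach, but a minimal sketch helps picture it: a finite-state dialogue controller in which each stage constrains the prompt sent to the LLM. All stage names, guideline texts, and the injected `call_llm` function below are hypothetical assumptions, not MindfulDiary's actual design.

```python
# Illustrative sketch only: a finite-state dialogue controller in the spirit
# of the "state-based approach" described above. Stage names, prompts, and
# the injected `call_llm` function are hypothetical.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    system_prompt: str  # expert-authored guideline constraining this stage
    max_turns: int      # bound on free-form turns before advancing

STAGES = [
    Stage("rapport", "Greet the patient warmly and ask how their day went.", 2),
    Stage("exploration", "Ask open-ended questions about events and feelings; "
          "do not give medical advice.", 6),
    Stage("wrap_up", "Summarize the entry and close supportively.", 2),
]

def run_session(call_llm):
    """Advance through the stages, letting the LLM converse freely within
    each stage's guideline so the dialogue stays inside expert-approved
    bounds."""
    transcript = []
    for stage in STAGES:
        for _ in range(stage.max_turns):
            user_msg = input(f"[{stage.name}] you: ")
            reply = call_llm(system=stage.system_prompt,
                             history=transcript, user=user_msg)
            transcript.append((user_msg, reply))
            print("diary:", reply)
    return transcript
```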

Authors
Taewan Kim
KAIST, Daejeon, Korea, Republic of
Seolyeong Bae
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
Hyun Ah Kim
NAVER Cloud, Gyeonggi-do, Korea, Republic of
Su-woo Lee
Wonkwang University Hospital, Iksan, Korea, Republic of
Hwajung Hong
KAIST, Daejeon, Korea, Republic of
Chanmo Yang
Wonkwang University Hospital, Wonkwang University, Iksan, Jeonbuk, Korea, Republic of
Young-Ho Kim
NAVER AI Lab, Seongnam, Gyeonggi, Korea, Republic of
Paper URL

https://doi.org/10.1145/3613904.3642937

Shaping Human-AI Collaboration: Varied Scaffolding Levels in Co-writing with Language Models
Abstract

Advances in language modeling have paved the way for novel human-AI co-writing experiences. This paper explores how varying levels of scaffolding from large language models (LLMs) shape the co-writing process. Employing a within-subjects field experiment with a Latin square design, we asked participants (N=131) to respond to argumentative writing prompts under three randomly sequenced conditions: no AI assistance (control), next-sentence suggestions (low scaffolding), and next-paragraph suggestions (high scaffolding). Our findings reveal a U-shaped impact of scaffolding on writing quality and productivity (words/time). While low scaffolding did not significantly improve writing quality or productivity, high scaffolding led to significant improvements, especially benefiting non-regular writers and less tech-savvy users. No significant cognitive burden was observed while using the scaffolded writing tools, but a moderate decrease in text ownership and satisfaction was noted. Our results have broad implications for the design of AI-powered writing tools, including the need for personalized scaffolding mechanisms.
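For concreteness, the two scaffolding conditions could plausibly be realized as two prompt configurations to the same model. The snippet below is an assumption-laden sketch, not the authors' implementation; `complete` stands in for any text-generation call.

```python
# Hypothetical sketch of the two scaffolding conditions as prompt
# configurations; `complete` is a stand-in for any text-generation call,
# not a real API.
LOW_SCAFFOLD = {
    "instruction": "Suggest the single next sentence of this essay.",
    "max_tokens": 40,
}
HIGH_SCAFFOLD = {
    "instruction": "Suggest the next full paragraph of this essay.",
    "max_tokens": 200,
}

def suggest(draft: str, condition: dict, complete) -> str:
    """Return a suggestion whose granularity depends on the condition."""
    prompt = f"{condition['instruction']}\n\nEssay so far:\n{draft}"
    return complete(prompt, max_tokens=condition["max_tokens"])
```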

Authors
Paramveer Dhillon
University of Michigan, Ann Arbor, Michigan, United States
Somayeh Molaei
University of Michigan, Ann Arbor, Michigan, United States
Jiaqi Li
School of Information, University of Michigan, Ann Arbor, Michigan, United States
Maximilian Golub
University of Michigan, Ann Arbor, Michigan, United States
Shaochun Zheng
University of California, San Diego, La Jolla, California, United States
Lionel Peter Robert
University of Michigan, Ann Arbor, Michigan, United States
Paper URL

https://doi.org/10.1145/3613904.3642134

The HaLLMark Effect: Supporting Provenance and Transparent Use of Large Language Models in Writing with Interactive Visualization
Abstract

The use of Large Language Models (LLMs) for writing has sparked controversy among both readers and writers. On one hand, writers are concerned that LLMs will deprive them of agency and ownership, and readers are concerned about spending their time on text generated by soulless machines. On the other hand, AI assistance can improve writing as long as writers can conform to publisher policies, and as long as readers can be assured that a text has been verified by a human. We argue that a system that captures the provenance of interaction with an LLM can help writers retain their agency, conform to policies, and communicate their use of AI to publishers and readers transparently. Thus we propose HaLLMark, a tool for visualizing the writer's interaction with the LLM. We evaluated HaLLMark with 13 creative writers and found that it helped them retain a sense of control and ownership of the text.
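The provenance capture such a tool depends on can be pictured as an append-only event log. The sketch below is illustrative only; the field names and helper functions are assumptions, not HaLLMark's actual schema.

```python
# Illustrative append-only provenance log; field names are assumptions, not
# HaLLMark's actual schema.
import time
from dataclasses import dataclass, field

@dataclass
class ProvenanceEvent:
    source: str   # "human" or "ai"
    action: str   # "insert", "delete", "accept_suggestion", ...
    span: tuple   # (start, end) character offsets in the document
    text: str
    timestamp: float = field(default_factory=time.time)

log: list[ProvenanceEvent] = []

def record(source: str, action: str, span: tuple, text: str) -> None:
    log.append(ProvenanceEvent(source, action, span, text))

def ai_fraction() -> float:
    """Share of inserted characters that came from the AI: one statistic a
    provenance visualization could surface to writers and publishers."""
    inserts = [e for e in log if e.action in ("insert", "accept_suggestion")]
    total = sum(len(e.text) for e in inserts)
    ai = sum(len(e.text) for e in inserts if e.source == "ai")
    return ai / total if total else 0.0
```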

Authors
Md Naimul Hoque
University of Maryland, College Park, Maryland, United States
Tasfia Mashiat
George Mason University, Fairfax, Virginia, United States
Bhavya Ghai
Amazon, New York, New York, United States
Cecilia D. Shelton
University of Maryland, College Park, Maryland, United States
Fanny Chevalier
University of Toronto, Toronto, Ontario, Canada
Kari Kraus
University of Maryland, College Park, Maryland, United States
Niklas Elmqvist
Aarhus University, Aarhus, Denmark
Paper URL

https://doi.org/10.1145/3613904.3641895

ABScribe: Rapid Exploration & Organization of Multiple Writing Variations in Human-AI Co-Writing Tasks using Large Language Models
Abstract

Exploring alternative ideas by rewriting text is integral to the writing process. State-of-the-art Large Language Models (LLMs) can simplify writing variation generation. However, current interfaces pose challenges for simultaneous consideration of multiple variations: creating new variations without overwriting text can be difficult, and pasting them sequentially can clutter documents, increasing workload and disrupting writers' flow. To tackle this, we present ABScribe, an interface that supports rapid, yet visually structured, exploration and organization of writing variations in human-AI co-writing tasks. With ABScribe, users can swiftly modify variations using LLM prompts, which are auto-converted into reusable buttons. Variations are stored adjacently within text fields for rapid in-place comparisons using mouse-over interactions on a popup toolbar. Our user study with 12 writers shows that ABScribe significantly reduces task workload (d = 1.20, p < 0.001), enhances user perceptions of the revision process (d = 2.41, p < 0.001) compared to a popular baseline workflow, and provides insights into how writers explore variations using LLMs.
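The two core ideas here, variations stored in place and prompts turned into reusable buttons, can be sketched as a simple data model. The names and the injected `rewrite` function below are hypothetical illustrations, not ABScribe's actual code.

```python
# Hypothetical data model for in-place writing variations and reusable
# prompt buttons, in the spirit of the interface described above; names and
# the injected `rewrite` function are assumptions.
from dataclasses import dataclass

@dataclass
class VariationField:
    """A text span holding several alternatives, exactly one of them active."""
    variations: list
    active: int = 0

    def add(self, text: str) -> None:
        self.variations.append(text)  # new variation; nothing is overwritten

    def select(self, i: int) -> None:
        self.active = i               # e.g. via the mouse-over popup toolbar

@dataclass
class PromptButton:
    """A reusable LLM prompt, auto-created the first time it is used."""
    label: str
    prompt: str

    def apply(self, fld: VariationField, rewrite) -> None:
        # rewrite(prompt, text) -> str is any LLM rewriting call
        fld.add(rewrite(self.prompt, fld.variations[fld.active]))
```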

Authors
Mohi Reza
University of Toronto, Toronto, Ontario, Canada
Nathan M. Laundry
University of Toronto, Toronto, Ontario, Canada
Ilya Musabirov
University of Toronto, Toronto, Ontario, Canada
Peter Dushniku
University of Toronto, Toronto, Ontario, Canada
Zhi Yuan "Michael" Yu
University of Toronto, Toronto, Ontario, Canada
Kashish Mittal
University of Toronto, Toronto, Ontario, Canada
Tovi Grossman
University of Toronto, Toronto, Ontario, Canada
Michael Liut
University of Toronto Mississauga, Mississauga, Ontario, Canada
Anastasia Kuzminykh
University of Toronto, Toronto, Ontario, Canada
Joseph Jay Williams
University of Toronto, Toronto, Ontario, Canada
Paper URL

https://doi.org/10.1145/3613904.3641899
