There is a gap between how people explore data and how Jupyter-like computational notebooks are designed. People explore data nonlinearly, using execution undos, branching, and/or complete reverts, whereas notebooks are designed for sequential exploration. Recent systems such as ForkIt remain insufficient to support these multiple modes of nonlinear exploration in a unified way. In this work, we address this challenge by introducing two-dimensional code+data space versioning for computational notebooks and verifying its effectiveness with our prototype system, Kishuboard, which integrates with Jupyter. By adjusting code and data knobs, users of Kishuboard can intuitively and flexibly manage the state of computational notebooks, achieving both execution rollbacks and checkouts across a complex multi-branch exploration history. Moreover, this two-dimensional versioning mechanism can easily be presented alongside a friendly one-dimensional history. Human-subject studies indicate that Kishuboard significantly enhances user productivity in a variety of data science tasks.
https://dl.acm.org/doi/10.1145/3706598.3714141
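The two-dimensional code+data versioning idea above can be illustrated with a minimal sketch. All names here are hypothetical, not Kishuboard's actual implementation: each commit records both a code snapshot and a data checkpoint, and a checkout can restore either axis independently.

```python
from dataclasses import dataclass


@dataclass
class Commit:
    """One point in the 2D history: a code snapshot plus a data checkpoint."""
    commit_id: int
    code: str            # notebook cell source at this point
    data: dict           # checkpointed variable state
    parent: "Commit | None" = None


class History:
    """Hypothetical multi-branch history supporting rollback and checkout."""

    def __init__(self):
        self._next_id = 0
        self.head = None

    def commit(self, code, data):
        c = Commit(self._next_id, code, dict(data), parent=self.head)
        self._next_id += 1
        self.head = c
        return c

    def checkout(self, commit, restore_code=True, restore_data=True):
        """Turn the code and data 'knobs' independently."""
        code = commit.code if restore_code else self.head.code
        data = dict(commit.data) if restore_data else dict(self.head.data)
        self.head = commit
        return code, data


h = History()
a = h.commit("x = 1", {"x": 1})
b = h.commit("x = x + 1", {"x": 2})
code, data = h.checkout(a)   # full rollback: both code and data from commit a
```

Because each commit keeps its parent, branching falls out naturally: committing after a checkout starts a new branch from that point.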
We present HaptiCoil, an embedded system and interaction method for prototyping low-cost, compact, and customizable wide-bandwidth (1–500 Hz) soft haptic buttons. HaptiCoil devices are built using mass-produced, waterproof planar micro-speakers, which are adapted to direct energy to the skin using a novel hydraulic coupling mechanism. They can sense force input, using a measurement of self-inductance, and provide output in a single package, yielding a flexible all-in-one button solution. Our devices offer a wider perceptual range of tactile stimuli than industry-standard approaches, while maintaining comparable power threshold levels (typical threshold under 40 mW). We detail the construction and underlying principles of our approach, as well as an extensive physical quantification of both input and output. We share psychophysical data on device bandwidth, and show three illustrative examples of how HaptiCoil buttons can be implemented in use cases such as spatial computing, digital inking, and remote control.
https://dl.acm.org/doi/10.1145/3706598.3713175
Creating animation takes time, effort, and technical expertise. To help novices with animation, we present LogoMotion, an AI code generation approach that helps users create semantically meaningful animation for logos. LogoMotion automatically generates animation code with a method called visually-grounded code synthesis and program repair. This method performs visual analysis, instantiates a design concept, and conducts visual checking to generate animation code. LogoMotion provides novices with code-connected AI editing widgets that help them edit the motion, grouping, and timing of their animation. In a comparison study on 276 animations, LogoMotion was found to produce more content-aware animation than an industry-leading tool. In a user evaluation (n=16) comparing against a prompt-only baseline, these code-connected widgets helped users edit animations with control, iteration, and creative expression.
https://dl.acm.org/doi/10.1145/3706598.3714155
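The visually-grounded code synthesis and program repair described above follows a generate-check-repair pattern. The sketch below shows that generic loop with toy stand-ins; the function names are assumptions for illustration, while the actual LogoMotion pipeline uses an LLM for generation and real visual analysis for checking.

```python
def generate_with_repair(concept, generate, check, repair, max_attempts=3):
    """Generic generate-check-repair loop: propose code, verify it against
    the design, and repair until the check passes or attempts run out."""
    code = generate(concept)
    for _ in range(max_attempts):
        errors = check(code)          # e.g. render and compare to the layout
        if not errors:
            return code
        code = repair(code, errors)
    return code


# Toy stand-ins: "code" is a string, and the visual check demands a fade-in.
gen = lambda concept: f"animate({concept!r})"
chk = lambda code: [] if "fade_in" in code else ["missing fade-in"]
fix = lambda code, errs: code + "; fade_in()"

result = generate_with_repair("logo", gen, chk, fix)
```

The loop terminates early as soon as the visual check reports no errors, which keeps repair attempts bounded.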
Fully autonomous teams of LLM-powered AI agents are emerging that collaborate to perform complex tasks for users. What challenges do developers face when trying to build and debug these AI agent teams? In formative interviews with five AI agent developers, we identify three core challenges: difficulty reviewing long agent conversations to localize errors, lack of support in current tools for interactive debugging, and the need for tool support to iterate on agent configuration. Based on these needs, we developed an interactive multi-agent debugging tool, AGDebugger, with a UI for browsing and sending messages, the ability to edit and reset prior agent messages, and an overview visualization for navigating complex message histories. In a two-part user study with 14 participants, we identify common user strategies for steering agents and highlight the importance of interactive message resets for debugging. Our studies deepen understanding of interfaces for debugging increasingly important agentic workflows.
https://dl.acm.org/doi/10.1145/3706598.3713581
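The edit-and-reset interaction central to the abstract above can be illustrated with a minimal transcript sketch (a hypothetical API, not AGDebugger's implementation): editing rewrites a prior message in place, and resetting truncates everything after it so the agent conversation can be replayed from that point.

```python
class MessageHistory:
    """Toy multi-agent transcript supporting edit and reset."""

    def __init__(self):
        self.messages = []   # list of (agent, text) tuples

    def append(self, agent, text):
        self.messages.append((agent, text))

    def edit(self, index, new_text):
        """Rewrite a prior message in place, keeping its author."""
        agent, _ = self.messages[index]
        self.messages[index] = (agent, new_text)

    def reset_to(self, index):
        """Drop everything after message `index` so agents re-run from there."""
        self.messages = self.messages[: index + 1]


h = MessageHistory()
h.append("planner", "search the web")
h.append("coder", "wrote buggy query")
h.append("critic", "results look wrong")
h.edit(1, "wrote corrected query")   # steer by editing a prior message
h.reset_to(1)                        # replay the conversation from there
```

After the reset, downstream agents would regenerate their turns from the edited message rather than continuing from the stale transcript.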
AI programming tools enable powerful code generation, and recent prototypes attempt to reduce user effort with proactive AI agents, but their impact on programming workflows remains unexplored. We introduce and evaluate Codellaborator, a design-probe LLM agent that initiates programming assistance based on editor activities and task context. We explored three interface variants to assess trade-offs between increasingly salient AI support: prompt-only, proactive agent, and proactive agent with presence and context (Codellaborator). In a within-subject study (N=18), we find that proactive agents increase efficiency compared to the prompt-only paradigm, but also incur workflow disruptions. However, presence indicators and interaction-context support alleviated these disruptions and improved users' awareness of AI processes. We underscore Codellaborator's trade-offs regarding user control, ownership, and code understanding, emphasizing the need to adapt proactivity to programming processes. Our research contributes to the design exploration and evaluation of proactive AI systems, presenting design implications for AI-integrated programming workflows.
https://dl.acm.org/doi/10.1145/3706598.3713357
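A proactive agent like the one described above needs a policy for when to interject. One common pattern, sketched here as an assumption rather than Codellaborator's actual design, is to trigger assistance only after the editor has been idle for some time following a change:

```python
import time


class ProactiveTrigger:
    """Fire an assistance callback after `idle_seconds` of no edits."""

    def __init__(self, idle_seconds, on_trigger, clock=time.monotonic):
        self.idle_seconds = idle_seconds
        self.on_trigger = on_trigger
        self.clock = clock
        self.last_edit = clock()
        self.fired = False

    def on_edit(self):
        """Editor activity re-arms the trigger and restarts the idle timer."""
        self.last_edit = self.clock()
        self.fired = False

    def poll(self):
        """Call periodically; fires at most once per idle period."""
        if not self.fired and self.clock() - self.last_edit >= self.idle_seconds:
            self.fired = True
            self.on_trigger()


# Usage with a fake clock so the sketch is deterministic:
now = [0.0]
suggestions = []
t = ProactiveTrigger(5.0, lambda: suggestions.append("offer help"),
                     clock=lambda: now[0])
t.on_edit()
now[0] = 3.0
t.poll()          # too soon after the edit: nothing fires
now[0] = 6.0
t.poll()          # idle long enough: the agent interjects once
```

The `fired` flag prevents repeated interruptions during one idle stretch, one simple way to trade proactivity against workflow disruption.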
In this work, we explore explicit Large Language Model (LLM)-powered support for the iterative design of computer programs. Program design, like other design activity, is characterized by navigating a space of alternative problem formulations and associated solutions in an iterative fashion. LLMs are potentially powerful tools for helping this exploration; however, by default, code-generation LLMs deliver code that represents a particular point solution. This obscures the larger space of possible alternatives, many of which might be preferable to the LLM’s default interpretation and its generated code. We contribute an IDE that supports program design by generating and showing new ways to frame problems alongside alternative solutions, tracking design decisions, and identifying implicit decisions made by either the programmer or the LLM. In a user study, we find that with our IDE, users combine and parallelize design phases to explore a broader design space, but also struggle to keep up with LLM-originated changes to code and with the resulting information overload. These findings suggest a core challenge for future IDEs that support program design through higher-level instructions given to LLM-based agents: carefully managing attention and deciding what information agents should surface to program designers, and when.
https://dl.acm.org/doi/10.1145/3706598.3714154
The recent surge of research on software developers' mental health challenges highlights the importance and urgency of studying solutions to support developer wellbeing. Self-Determination Theory (SDT) offers a valuable framework for exploring wellbeing at work, emphasizing the need to satisfy three psychological needs: autonomy, competence, and relatedness. This paper presents an interview study with 31 software developers in the United States that uses SDT as a guide, exploring how these three needs are perceived and influenced in the work of software developers. We identify specific workplace factors and processes, as well as work tools and designs, that affect the satisfaction of developers’ psychological needs. Results from our study can help design targeted solutions to satisfy developers’ psychological needs, which in turn supports developer wellbeing. This paper highlights the necessity of healthy work cultures in software development and presents design considerations for creating tools for developers.
https://dl.acm.org/doi/10.1145/3706598.3713250