Graphical user interfaces (GUIs) are at the heart of almost all software we encounter. GUIs are often created through a collaborative effort involving UX designers, product owners, and software developers, who constantly face changing requirements. Historically, problems in GUI development include a fragmented, poorly integrated tool landscape and high synchronization effort between stakeholders. Recent approaches suggest using large language models (LLMs) to recognize whether requirements are fulfilled in GUIs and to automatically propose new GUI components. Based on ten interviews with practitioners, this paper proposes an LLM-based assistant, implemented as a Figma plug-in, that bridges the gap between user stories and GUI prototyping. We evaluated the prototype with 40 users and 40 crowd workers, showing that using LLMs to detect requirement completion and to generate new GUI components improves the effectiveness of GUI creation. We derive design rationales to support cross-functional integration in software development, ensuring that our plug-in integrates well into established processes.
Marine science researchers are heavy users of software tools and systems such as statistics packages, visualization tools, and online data catalogues. Following a constructivist grounded theory approach, we conduct a semi-structured interview study of 23 marine science researchers and research support staff at a North American university to understand their perceptions of, and approaches towards, using both graphical and code-based software tools and systems. We propose the concept of fragmentation to describe how various factors lead to isolated pockets of views and practices concerning software tool use during the research process. These factors include informal learning of tools, preferences for doing things from scratch, and a push towards more code-based tools. Based on our findings, we suggest design priorities for user interfaces that could more effectively support marine scientists in making and using software tools and systems.
Large Language Model (LLM)-based in-application assistants, or copilots, can automate software tasks, but users often prefer learning by doing, raising questions about the optimal level of automation for an effective user experience. We investigated two automation paradigms by designing and implementing a fully automated copilot (AutoCopilot) and a semi-automated copilot (GuidedCopilot) that automates trivial steps while offering step-by-step visual guidance. In a user study (N=20) across data analysis and visual design tasks, GuidedCopilot outperformed AutoCopilot in user control, software utility, and learnability, especially for exploratory and creative tasks, while AutoCopilot saved time for simpler visual tasks. A follow-up design exploration (N=10) enhanced GuidedCopilot with task- and state-aware features, including in-context preview clips and adaptive instructions. Our findings highlight the critical role of user control and tailored guidance in designing the next generation of copilots that enhance productivity, support diverse skill levels, and foster deeper software engagement.
Generative AI models are increasingly being integrated into human task workflows, enabling the production of expressive content across a wide range of contexts. Unlike traditional human-AI design methods, the new approach to designing generative capabilities focuses heavily on prompt engineering strategies. This shift requires a deeper understanding of how collaborative software teams establish and apply design guidelines, iteratively prototype prompts, and evaluate them to achieve specific outcomes. To explore these dynamics, we conducted design studies with 39 industry professionals, including UX designers, AI engineers, and product managers. Our findings highlight emerging practices and role shifts in AI system prototyping among multi-stakeholder teams. We observe various prompting and prototyping strategies, highlighting the pivotal role that the characteristics of the content to be generated play in enabling rapid, iterative prototyping with generative AI. By identifying associated challenges, such as limited model interpretability and overfitting the design to specific example content, we outline considerations for generative AI prototyping.
This paper presents findings from a think-aloud protocol exploring the mental models of 28 elementary school math teachers during their initial attempt at composing and testing trigger-action rules for a smart tangible educational device. In the study, two sets of event-driven primitives were implemented in an End-User Development platform to guide teachers with no programming experience in defining new functions of the device: "concrete" primitives, based on actual actions performed on the device, and "abstract" primitives, based on general definitions of events and actions. Through a thematic analysis, we identified three different metaphors that drive participants' interaction with the device. We discuss how these metaphors influenced performance and how the order of exposure to the two primitive sets affected participants' grasp of the trigger-action logic. Our findings suggest the importance of guiding teachers toward effective metaphors for performing End-User Development tasks, empowering them to take an active role toward digital devices in education.
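To make the distinction between the two primitive sets more tangible, the following TypeScript sketch models a trigger-action rule with either a "concrete" trigger (tied to a physical action on the device) or an "abstract" trigger (a general event definition). The type names and example events are hypothetical illustrations, not the platform studied in the paper.

```typescript
// Hypothetical model of trigger-action rules contrasting "concrete" and
// "abstract" event-driven primitives. All names here are invented.

type ConcreteTrigger = { kind: "concrete"; action: "shake" | "tapTopFace" | "flip" };
type AbstractTrigger = { kind: "abstract"; event: string }; // e.g. "device moved"
type Trigger = ConcreteTrigger | AbstractTrigger;

interface Rule {
  trigger: Trigger;
  run: () => void; // what the device should do when the trigger fires
}

// "Concrete" rule: when the device is shaken, play a sound.
const concreteRule: Rule = {
  trigger: { kind: "concrete", action: "shake" },
  run: () => console.log("play sound"),
};

// "Abstract" rule: when any "device moved" event occurs, turn on a light.
const abstractRule: Rule = {
  trigger: { kind: "abstract", event: "device moved" },
  run: () => console.log("turn on light"),
};

// Minimal dispatcher: fire every rule whose trigger matches the incoming event.
function dispatch(rules: Rule[], event: string): void {
  for (const rule of rules) {
    const matches =
      rule.trigger.kind === "concrete"
        ? rule.trigger.action === event
        : rule.trigger.event === event;
    if (matches) rule.run();
  }
}

dispatch([concreteRule, abstractRule], "shake"); // -> "play sound"
```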
While current chat-based AI assistants primarily operate reactively, responding only when prompted by users, there is significant potential for these systems to proactively assist in tasks without explicit invocation, enabling a mixed-initiative interaction. This work explores the design and implementation of proactive AI assistants powered by large language models. We first outline the key design considerations for building effective proactive assistants. As a case study, we propose a proactive chat-based programming assistant that automatically provides suggestions and facilitates their integration into the programmer's code. The programming context provides a shared workspace enabling the assistant to offer more relevant suggestions. We conducted a randomized experimental study examining the impact of various design elements of the proactive assistant on programmer productivity and user experience. Our findings reveal significant benefits of incorporating proactive chat assistants into coding environments, while also uncovering important nuances that influence their usage and effectiveness.
Tools to inspect runtime state, like print statements and debuggers, are an essential part of programming. Yet a major limitation is that they present data at a fixed, low level of abstraction, which can overload the user with irrelevant details. In contrast, human drawings of data structures use many illustrative visual abstractions to show the most useful information. We attempt to bridge this gap by surveying 80 programmer-produced diagrams and developing a mechanical approach for capturing visual abstraction, built from operations termed abstraction moves. An abstraction move selects data objects of interest and then revisualizes, simplifies, or annotates them. We implement these moves as a diagramming language for JavaScript code, named Chisel, and show that it can effectively reproduce 78 of the 80 surveyed diagrams. In a preliminary study with four CS educators, we evaluate its usage and identify potential contexts of use. Our approach of mechanically moving between levels of abstraction in data displays opens the door to new tools and workflows in programming education and software development.
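As a rough illustration of what an abstraction move does (select objects of interest, then revisualize them at a higher level), the TypeScript sketch below revisualizes a linked list as a flat array of its values, hiding the pointer structure a raw debugger view would show. The interface and names are invented for illustration and are not Chisel's actual syntax.

```typescript
// Illustrative sketch of an "abstraction move": select data objects of
// interest, then revisualize them more abstractly. Types are hypothetical.

interface ListNode {
  value: number;
  next: ListNode | null;
}

interface AbstractionMove<T, V> {
  select: (root: T) => T[];       // pick the objects to show
  revisualize: (objs: T[]) => V;  // render them as a simpler view
}

// Revisualize a linked list as the array of its values.
const listAsArray: AbstractionMove<ListNode, number[]> = {
  select: (root) => {
    const nodes: ListNode[] = [];
    for (let n: ListNode | null = root; n !== null; n = n.next) nodes.push(n);
    return nodes;
  },
  revisualize: (nodes) => nodes.map((n) => n.value),
};

// Example: the list 3 -> 1 -> 4 is displayed as [3, 1, 4].
const list: ListNode = { value: 3, next: { value: 1, next: { value: 4, next: null } } };
console.log(listAsArray.revisualize(listAsArray.select(list))); // [3, 1, 4]
```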