In what ways do generative AI (GenAI) tools embedded in mainstream design software affect UI design practices and outcomes? In this study, we examine the use of FigmaAI, a GenAI feature within the industry-standard platform Figma. We conducted a within-subject study with 16 professional UX designers, each of whom completed two high-fidelity UI design tasks in a think-aloud session: one conventionally, without AI, and one with FigmaAI. We analyse the design process itself alongside designers' experiences and reflections on working with and without AI, gathered from accompanying semi-structured interviews and post-task questionnaires. Our findings suggest that GenAI reshapes UI workflows, shifting them from additive to subtractive processes: designers refine AI drafts rather than building interfaces from scratch. While AI reduces workload and accelerates initial setup, it also constrains exploration, limits perceived ownership, and produces designs that are more visually and structurally similar to one another.
Despite their increasing capabilities, text-to-image generative AI systems are known to produce biased, offensive, and otherwise problematic outputs. While recent advancements have supported testing and auditing of generative AI, existing auditing methods still struggle to support effective, structured exploration of the vast space of AI-generated outputs. To address this gap, we conducted formative studies with five AI auditors and synthesized five design goals for supporting systematic AI audits. Based on these insights, we developed Vipera, an interactive auditing interface that employs multiple visual cues, including a scene graph, to facilitate image sensemaking and inspire auditors to explore and hierarchically organize auditing criteria. Additionally, Vipera leverages LLM-powered suggestions to surface unexplored auditing directions. Through a controlled experiment with 24 participants experienced in AI auditing, we demonstrate Vipera's effectiveness in helping auditors navigate large AI output spaces and organize their analyses while engaging with diverse criteria.
The dissemination of scholarly research is critical, yet researchers often lack the time and skills to create engaging content for popular media such as short-form videos. To address this gap, we explore the use of generative AI to help researchers transform their academic papers into accessible video content. Informed by a formative study with science communicators and content creators (N=8), we designed PaperTok, an end-to-end system that automates the initial creative labor by generating script options and corresponding audiovisual content from a source paper. Researchers can then refine the generated content to match their preferences through further prompting. A mixed-methods user study (N=18) and a crowdsourced evaluation (N=100) demonstrate that PaperTok's workflow can help researchers create engaging and informative short-form videos. We also identified the need for more fine-grained controls in the creation process. Based on these findings, we offer implications for future generative tools that support science outreach.
Web search engines have evolved drastically over the past two decades, transitioning from simple link providers to direct answer providers. AI technologies, particularly generative large language models, have accelerated this shift by embedding conversational and personalized features directly into search systems. As a result, user expectations of and approaches to search have fundamentally changed. Despite this evolution, research continues to rely on search intent frameworks from the early web era; in particular, Broder's influential taxonomy still guides much research in this domain. Given the transformation of web search, we argue that these frameworks need rethinking. We challenge Broder's taxonomy and propose a new, user-centered framework grounded in contemporary practices. Through an innovative survey combining user reflections with interaction histories, we distinguish three intent categories: knowledge-seeking, guidance-seeking, and output-seeking. This taxonomy applies across search engines and GenAI chatbots, offering a flexible lens for understanding information seeking in different search systems.
The integration of Generative AI (GenAI) into Architecture, Engineering, and Construction (AEC) design marks a paradigm shift, yet empirical studies of its adoption by AEC professionals are scarce. This study aims to bridge this gap through a mixed-methods design involving interviews with 20 professionals and a survey of 191 professionals. Findings show that GenAI is used mainly in conceptual design, improving collaboration with clients and within teams, particularly in concept-only projects, projects demanding strong site integration, and challenging client engagements, where it supports inspiration exploration, client communication, and the expansion of design service boundaries. We further uncover a critical paradox: professionals perceive GenAI's creativity as intrinsically linked to its unpredictability. Additionally, we identify a nuanced "shame-yet-pride" dynamic, in which professionals publicly regard GenAI as a key asset yet conceal its use in order to overstate their manual effort. Based on these insights, we recommend fostering an industry-wide discussion to shift mindsets toward transparent collaboration with GenAI. This work offers the first comprehensive empirical account of GenAI's current role and future potential in transforming AEC design practices.
Governments are the primary providers of essential public services and are responsible for delivering them effectively. In high-stakes decision-making domains such as child welfare (CW), agencies must protect children without unnecessarily prolonging a family's engagement with the system. With growing optimism around AI, governments are pushing for its integration, but concerns regarding feasibility and harms remain. Through a collaboration with a large Canadian CW agency, we examined how local LLM and BERTopic models can track CW case progress. We demonstrate how these tools can potentially assist workers by signaling case progress and deviations, helping them opportunistically address gaps in their work. Yet we also show how the tools fail to detect case trajectories that require discretionary judgments grounded in social work training, precisely the areas where practitioners would want support to pre-emptively address substantive case concerns. We also provide a roadmap of future participatory directions for co-designing language tools for and with the public sector.
Although organizations increasingly position AI adoption as a pathway to competitiveness and innovation, their focus on productivity and efficiency often clashes with workers' perspectives on AI's economic and social value. Through design workshops with 15 UX designers, we examine how AI adoption unfolds across individual, team, and organizational scales. At the individual level, designers weighed efficiency, skill development, and professional worth. At the team level, they negotiated collaboration, responsibility, and rigor. At the organizational level, adoption was shaped by compliance requirements and organizational norms. Across these scales, discourses of efficiency carried social and ethical dimensions of responsibility, trust, and autonomy. We view adoption as a site where roles, relationships, and power are reconfigured. We argue that AI adoption should be understood as a process of negotiating values, and we call for future work examining how AI systems redistribute responsibility among team members and how such shifts could strengthen worker agency.