UI/UX Design

Conference Name
CHI 2025
Understanding Socio-technical Factors Configuring AI Non-Use in UX Work Practices
Abstract

AI tools are often promoted as revolutionary for streamlining labor- and cost-intensive UX workflows. Although their actual adoption and usage are more complex and nuanced than often portrayed, instances where AI may be unnecessary or even undesirable are frequently overlooked. Therefore, we aim to gain deeper insights into technology non-use—viewed not merely as a binary opposite to use but as a spectrum of practices. Through semi-structured interviews with 15 UX practitioners, we identified factors influencing non-use across individual, professional, organizational, and societal dimensions. We use a sociotechnical assemblage lens to explore how multiple layers of an individual’s context interact within professional settings, how diverse politics intersect within individuals or organizations, and how these interactions evolve over time. We propose implications for rethinking AI application design and evaluation, for considering policy frameworks and AI design together, and for deliberating about where AI should and should not be used.

Authors
Inha Cha
Georgia Institute of Technology, Atlanta, Georgia, United States
Richmond Y. Wong
Georgia Institute of Technology, Atlanta, Georgia, United States
DOI

10.1145/3706598.3713140

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713140

Video
Beyond Automation: How Designers Perceive AI as a Creative Partner in the Divergent Thinking Stages of UI/UX Design
Abstract

Divergent thinking activities, like research and ideation, are key drivers of innovation in UI/UX design. Existing research has explored AI's role in automating design tasks, but leaves a critical gap in understanding how AI specifically influences divergent thinking. To address this, we conducted interviews with 19 professional UI/UX designers, examining their use and perception of AI in these creative activities. We found that in this context, participants valued AI tools that offer greater control over ideation, facilitate collaboration, enhance efficiency to liberate creativity, and align with their visual habits. Our results indicated four key roles AI plays in supporting divergent thinking: aiding research, kick-starting creativity, generating design alternatives, and facilitating prototype exploration. Through this study, we provide insights into the evolving role of AI in the less-investigated area of divergent thinking in UI/UX design, offering recommendations for future AI tools that better support design innovation.

Authors
Abidullah Khan
Polytechnique Montreal, Montréal, Quebec, Canada
Atefeh Shokrizadeh
Polytechnique Montreal, Montreal, Quebec, Canada
Jinghui Cheng
Polytechnique Montreal, Montreal, Quebec, Canada
DOI

10.1145/3706598.3713500

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713500

Video
Dancing With Chains: Ideating Under Constraints With UIDEC in UI/UX Design
Abstract

UI/UX designers often work under constraints like brand identity, design norms, and industry guidelines. Creativity-support tools for design should address how these constraints impact designers' ideation and exploration processes. Through an exploratory interview study, we identified three designer personas with varying views on having constraints in the ideation process, which guided the creation of UIDEC, a GenAI-powered tool for supporting creativity under constraints. UIDEC allows designers to specify project details, such as purpose, target audience, industry, and design styles, based on which it generates diverse design examples that adhere to these constraints, with minimal need to write prompts. In a user evaluation involving designers representing the identified personas, participants found UIDEC compatible with their existing ideation process and useful for creative inspiration, especially when starting new projects. Our work provides design implications for AI-powered tools that integrate constraints during UI/UX design ideation to support creativity.

Authors
Atefeh Shokrizadeh
Polytechnique Montreal, Montreal, Quebec, Canada
Boniface Bahati Tadjuidje
Polytechnique Montreal, Montreal, Quebec, Canada
Shivam Kumar
Polytechnique Montreal, Montreal, Quebec, Canada
Sohan Kamble
Polytechnique Montreal, Montreal, Quebec, Canada
Jinghui Cheng
Polytechnique Montreal, Montreal, Quebec, Canada
DOI

10.1145/3706598.3713785

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713785

Video
Misty: UI Prototyping Through Interactive Conceptual Blending
Abstract

UI prototyping often involves iterating and blending elements from examples such as screenshots and sketches, but current tools offer limited support for incorporating these examples. Inspired by the cognitive process of conceptual blending, we introduce a novel UI workflow that allows developers to rapidly incorporate diverse aspects from design examples into work-in-progress UIs. We prototyped this workflow as Misty. Through an exploratory first-use study with 14 frontend developers, we assessed Misty's effectiveness and gathered feedback on this workflow. Our findings suggest that Misty's conceptual blending workflow helps developers kickstart creative explorations and flexibly specify intent in different stages of prototyping, and inspires them through serendipitous UI blends. Misty demonstrates the potential for tools that blur the boundaries between developers and designers.

Authors
Yuwen Lu
University of Notre Dame, Notre Dame, Indiana, United States
Alan Leung
Apple, Seattle, Washington, United States
Amanda Swearngin
Apple, Seattle, Washington, United States
Jeffrey Nichols
Apple Inc, San Diego, California, United States
Titus Barik
Apple, Seattle, Washington, United States
DOI

10.1145/3706598.3713924

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713924

Video
GazeSwipe: Enhancing Mobile Touchscreen Reachability through Seamless Gaze and Finger-Swipe Integration
Abstract

Smartphones with large screens provide users with increased display and interaction space but pose challenges in reaching certain areas with the thumb when using the device with one hand. To address this, we introduce GazeSwipe, a multimodal interaction technique that combines eye gaze with finger-swipe gestures, enabling intuitive and low-friction reach on mobile touchscreens. Specifically, we design a gaze estimation method that eliminates the need for explicit gaze calibration. Our approach also avoids the use of additional eye-tracking hardware by leveraging the smartphone's built-in front-facing camera. Considering the potential decrease in gaze accuracy without dedicated eye trackers, we use finger-swipe gestures to compensate for any inaccuracies in gaze estimation. Additionally, we introduce a user-unaware auto-calibration method that improves gaze accuracy during interaction. Through extensive experiments on smartphones and tablets, we compare our technique with various methods for touchscreen reachability and evaluate the performance of our auto-calibration strategy. The results demonstrate that our method achieves high success rates and is preferred by users. The findings also validate the effectiveness of the auto-calibration strategy.
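The interaction loop described above can be illustrated with a minimal sketch. This is not the paper's implementation; the class name, the simple additive bias model, and the exponential-moving-average update are all illustrative assumptions. It shows the general idea of a gaze estimate placing the cursor, a swipe refining it, and the final selection serving as an implicit calibration sample:

```python
# Illustrative sketch only (hypothetical names and logic, not GazeSwipe's
# actual method): coarse gaze positioning, swipe-based refinement, and a
# running bias correction in the spirit of user-unaware auto-calibration.

class GazeSwipeCursor:
    def __init__(self, alpha=0.2):
        self.bias = (0.0, 0.0)   # learned gaze-estimation offset (x, y)
        self.alpha = alpha       # smoothing factor for bias updates
        self._raw_gaze = None    # last uncorrected gaze estimate
        self.pos = (0.0, 0.0)    # current cursor position

    def on_gaze(self, x, y):
        """Place the cursor at the bias-corrected gaze estimate."""
        self._raw_gaze = (x, y)
        self.pos = (x + self.bias[0], y + self.bias[1])
        return self.pos

    def on_swipe(self, dx, dy):
        """Refine the cursor position with a relative finger swipe,
        compensating for gaze-estimation error."""
        self.pos = (self.pos[0] + dx, self.pos[1] + dy)
        return self.pos

    def on_select(self):
        """Treat the final cursor position as ground truth and nudge the
        bias toward (final - raw gaze): a calibration sample collected
        without interrupting the user."""
        if self._raw_gaze is not None:
            ex = self.pos[0] - self._raw_gaze[0]
            ey = self.pos[1] - self._raw_gaze[1]
            self.bias = (
                (1 - self.alpha) * self.bias[0] + self.alpha * ex,
                (1 - self.alpha) * self.bias[1] + self.alpha * ey,
            )
        return self.pos
```

Under this toy model, each completed selection shrinks the expected swipe distance on subsequent targets, which is the intuition behind combining imprecise camera-based gaze with a cheap manual correction channel.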

Award
Best Paper
Authors
Zhuojiang Cai
Beihang University, Beijing, China
Jingkai Hong
Beihang University, Beijing, China
Zhimin Wang
Beihang University, Beijing, China
Feng Lu
Beihang University, Beijing, China
DOI

10.1145/3706598.3713739

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713739

Video
AI-Instruments: Embodying Prompts as Instruments to Abstract & Reflect Graphical Interface Commands as General-Purpose Tools
Abstract

Chat-based prompts respond with verbose, linear-sequential text, making it difficult to explore and refine ambiguous intents, back up and reinterpret, or shift directions in creative AI-assisted design work. AI-Instruments instead embody "prompts" as interface objects via three key principles: (1) Reification of user intent as reusable direct-manipulation instruments; (2) Reflection of multiple interpretations of ambiguous user intents (Reflection-in-intent) as well as the range of AI-model responses (Reflection-in-response) to inform design "moves" towards a desired result; and (3) Grounding to instantiate an instrument from an example, a result, or an extrapolation directly from another instrument. Further, AI-Instruments leverage LLMs to suggest, vary, and refine new instruments, enabling a system that goes beyond hard-coded functionality by generating its own instrumental controls from content. We demonstrate four technology probes, applied to image generation, and qualitative insights from twelve participants, showing how AI-Instruments address challenges of intent formulation, steering via direct manipulation, and non-linear iterative workflows to reflect and resolve ambiguous intents.

Award
Honorable Mention
Authors
Nathalie Riche
Microsoft Research, Redmond, Washington, United States
Anna Offenwanger
Microsoft Research, Redmond, Washington, United States
Frederic Gmeiner
Microsoft Research, Redmond, Washington, United States
David Brown
Microsoft Research, Redmond, Washington, United States
Hugo Romat
Microsoft, Seattle, Washington, United States
Michel Pahud
Microsoft Research, Redmond, Washington, United States
Nicolai Marquardt
Microsoft Research, Redmond, Washington, United States
Kori Inkpen
Microsoft, Redmond, Washington, United States
Ken Hinckley
Microsoft Research, Redmond, Washington, United States
DOI

10.1145/3706598.3714259

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714259

Video
Persona-L has Entered the Chat: Leveraging LLMs and Ability-based Framework for Personas of People with Complex Needs
Abstract

We present Persona-L, a novel approach for creating personas using Large Language Models (LLMs) and an ability-based framework, specifically designed to improve the representation of people with complex needs. Traditional methods of persona creation often fall short of accurately depicting the dynamic and diverse nature of complex needs, resulting in oversimplified or stereotypical profiles. Persona-L enables users to create and interact with personas through a chat interface. Persona-L was evaluated through interviews with UX designers (N=6), where we examined its effectiveness in reflecting the complexities of lived experiences of people with complex needs. Our findings indicate the potential of Persona-L to increase empathy and understanding of complex needs, while also revealing the need for transparency about the data used in persona creation, the role of language and tone, and the need for a more balanced presentation of abilities alongside constraints.

Authors
Lipeipei Sun
Northeastern University, Seattle, Washington, United States
Tianzi Qin
Northeastern University, Vancouver, British Columbia, Canada
Anran Hu
Northeastern University, Vancouver, British Columbia, Canada
Jiale Zhang
Northeastern University, Vancouver, British Columbia, Canada
Shuojia Lin
Northeastern University, Vancouver, British Columbia, Canada
Jianyan Chen
Northeastern University, Vancouver, British Columbia, Canada
Mona Ali
Suez Canal University, Ismailia, Egypt
Mirjana Prpa
Northeastern University, Vancouver, British Columbia, Canada
DOI

10.1145/3706598.3713445

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713445

Video