In early-stage industrial design, teams generate essential but fragile process knowledge—semantic tags, sketches, exploration paths—that is rarely captured or reused, even though it can be valuable at later design stages and is well suited to AI-assisted capture. Yet existing AI creativity tools remain outcome-oriented, offering limited support for preserving, tracing, or recombining underlying reasoning. Our formative study (N=6) revealed persistent challenges in team–AI ideation across sessions and collaborators, including semantic–visual fragmentation, context loss, and cross-tool disruption. These insights inspired CoNode, a two-layer system that embeds AI nodes within a shared whiteboard through triplet workflows and augments them with workflow-level consolidation, reuse, and recombination via the CoSense module. We conducted a two-stage evaluation: User Study I (N=12) validates CoNode's foundational interaction paradigm layer, and User Study II (N=30) evaluates its process-oriented knowledge layer. Results show that CoNode significantly improves knowledge consolidation, reuse, and recombination, effectively facilitating collaborative processes and demonstrating how generative AI can evolve process knowledge across collaborative rounds.
While the proliferation of foundation models has significantly boosted individual productivity, it also introduces a potential challenge: the homogenization of creative content. In response, we revisit Design-by-Analogy (DbA), a cognitively grounded approach that fosters novel solutions by mapping inspiration across domains. However, prevailing perspectives often restrict DbA to early ideation or specific data modalities, while reducing AI-driven design to simplified input–output pipelines. Such conceptual limitations inadvertently foster widespread design fixation. To address this, we expand the understanding of DbA by embedding it into the entire creative process, thereby demonstrating its capacity to mitigate such fixation. Through a systematic review of 85 studies, we identify six forms of representation and classify techniques across seven stages of the creative process. We further discuss three major application domains: creative industries, intelligent manufacturing, and education and services, demonstrating DbA’s practical relevance. Building on this synthesis, we frame DbA as a mediating technology for human-AI collaboration and outline the potential opportunities and inherent risks for advancing creativity support in HCI and design research.
The use of AI in product design during early creative phases raises questions about its long-term consequences. There are concerns that extended AI use might inhibit creative cognitive processes, especially in novice designers. This study aims to contribute to ongoing research on creative cognition and creativity support tools such as AI in design. We conducted an exploratory study with 61 undergraduate students to analyze design exploration in sketching versus AI concept generation. The results indicate that AI groups produced a higher quantity and variation of total ideas (including text-based ideas), while sketch groups generated more image-based ideas. Results were inconclusive as to whether the final image concepts from the AI or sketch groups were more creative. Additionally, homogenization effects were observed in the AI groups. Moreover, while the evolution of the design intent was evident in students who sketched, the focus in AI groups appeared to shift towards the tool itself, which we analyzed as distinct design space exploration (DSE) prompting styles.
Advances in Generative AI (GenAI) enable the creation of unexpected visual images. In fashion design, where fast-paced trends heighten the risk of fixation and drive exploration of novel creative directions, this capability has intensified demand for creativity support tools.
While prior work has explored interfaces that align designer intent with GenAI outputs, we still lack an empirical understanding of how fashion designers define, seek, and utilize AI-generated surprise as a valuable resource and actionable design direction rather than random noise.
We address this gap through a qualitative study combining semi-structured interviews with 20 fashion professionals and a design workshop with 12 graduate students. We conceptualized surprise as a strategy that can be designed into GenAI-powered visualization tools to support traceable exploration, contextual grounding, and controllable variation across ideation stages.
This work (1) reframes surprise as a designable mechanism or resource for co-creative interaction, (2) provides empirical insights into how fashion designers can utilize AI-generated surprise in the early stage of design, and (3) translates these insights into actionable guidance for building GenAI-driven visualization tools for fashion and related creative domains from a human-centered AI perspective.
The integration of generative artificial intelligence (GenAI) into design processes raises fundamental questions about behavioral patterns in human-GenAI interaction. This study examines how 16 professional designers interact with GenAI tools during concept development through a mixed-methods approach including pre/post-task questionnaires, video-based behavior analysis, and digital interaction tracking. Results reveal a critical distinction between reflective usage modes and creative modes, with differentiated cognitive impacts. Analysis of communication loops shows significant correlations between interaction difficulties and final design output quality. Three distinct clusters emerge: designers with fluid, problematic, and adaptive interaction patterns. By providing a methodological framework for evaluating GenAI tool effectiveness in design practice, this research contributes to theoretical understanding of behavioral processes in human-GenAI co-creation. Findings reveal specific strategies and workflow adaptations that optimize designer-GenAI collaboration, informing both design methodology and human-computer interaction practice.
Conceptual CAD requires transforming functional requirements into parametric 3D models, yet existing systems have steep learning curves and limit creativity through premature fixation. Generative AI shows promise in producing diverse alternatives, but current methods mainly reconstruct CAD modeling sequences of existing designs, making them unsuitable for early stages where ideas are vague and intent is difficult to express. We present Req2CAD, an interactive system that enables designers to progress from design problems toward conceptual CAD models through functional decomposition, function–structure reasoning, and component-level CAD creation and iteration. Req2CAD introduces a data annotation pipeline that maps functional requirements to the 3D structural design space, a dual-feature CAD representation to support design space exploration and CAD ideation, and a progressive CAD generation method that enables rapid CAD model creation through multi-modal intent expression. A technical evaluation and user study demonstrate the effectiveness of Req2CAD, highlighting its potential for human–AI co-creation.
As generative AI tools become embedded in creative practice, questions of ownership in co-creative contexts are pressing. Yet studies of human–AI collaboration often invoke "ownership" without definition: sometimes conflating it with other concepts, and other times leaving interpretation to participants. This inconsistency makes findings difficult to compare across or even within studies. We introduce a framework of creative ownership comprising three dimensions (Person, Process, and System), each with three subdimensions, offering a shared language for both system design and HCI research. In semi-structured interviews with 21 creative professionals, we found that participants' initial references to ownership (e.g., embodiment, control, concept) were fully encompassed by the framework, demonstrating its coverage. Once introduced, however, they also articulated and prioritized the remaining subdimensions, underscoring how the framework expands reflection and enables richer insights. Our contributions include 1) the framework, 2) a web-based visualization tool, and 3) empirical findings on its utility.