Brickify: Enabling Expressive Design Intent Specification through Direct Manipulation on Design Tokens

Abstract

Expressing design intent using natural language prompts requires designers to verbalize ambiguous visual details concisely, which can be challenging or even impossible. To address this, we introduce Brickify, a visual-centric interaction paradigm: expressing design intent through direct manipulation on design tokens. Brickify extracts visual elements (e.g., subject, style, and color) from reference images and converts them into interactive and reusable design tokens that can be directly manipulated (e.g., resized, grouped, or linked) to form a visual lexicon. The lexicon reflects users' intent about both which visual elements are desired and how to compose them into a whole. We developed Brickify to demonstrate how AI models can interpret and execute the visual lexicon through an end-to-end pipeline. In a user study, experienced designers found Brickify more efficient and intuitive than text-based prompts, allowing them to describe visual details, explore alternatives, and refine complex designs with greater ease and control.
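To make the design-token idea concrete, below is a minimal, hypothetical TypeScript sketch of how tokens extracted from reference images, and the manipulations the abstract mentions (resize, group, link), might be modeled. All type and function names here are illustrative assumptions, not the paper's actual implementation.

```typescript
// Hypothetical data model for design tokens; names are illustrative
// assumptions, not taken from the Brickify implementation.

type TokenKind = "subject" | "style" | "color";

interface DesignToken {
  id: string;
  kind: TokenKind;
  label: string;       // human-readable description, e.g. "watercolor style"
  sourceImage: string; // reference image the token was extracted from
  weight: number;      // relative emphasis, adjusted by resizing the token
}

// A visual lexicon: the tokens plus the relations users author by
// grouping and linking them on the canvas.
interface VisualLexicon {
  tokens: DesignToken[];
  groups: string[][];             // token ids composed as one unit
  links: Array<[string, string]>; // e.g. bind a color token to a subject
}

// Resizing a token could map to changing its weight in the lexicon.
function resizeToken(lexicon: VisualLexicon, id: string, weight: number): VisualLexicon {
  return {
    ...lexicon,
    tokens: lexicon.tokens.map(t => (t.id === id ? { ...t, weight } : t)),
  };
}

// Linking expresses "apply this token to that one" (e.g. color -> subject).
function linkTokens(lexicon: VisualLexicon, from: string, to: string): VisualLexicon {
  return { ...lexicon, links: [...lexicon.links, [from, to]] };
}
```

In an end-to-end pipeline like the one the abstract describes, such a lexicon would presumably be serialized and handed to a generative model in place of a free-form text prompt.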

Authors
Xinyu Shi
University of Waterloo, Waterloo, Ontario, Canada
Yinghou Wang
Harvard University, Cambridge, Massachusetts, United States
Ryan Rossi
Adobe Research, San Jose, California, United States
Jian Zhao
University of Waterloo, Waterloo, Ontario, Canada
DOI

10.1145/3706598.3714087

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714087

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Exploring Physical and Digital Product Design

G418+G419
7 presentations
2025-04-29, 20:10–21:40