GenColor: Generative Color-Concept Association in Visual Design

Abstract

Existing approaches to color-concept association typically rely on query-based image referencing and color extraction from the referenced images. However, these approaches are effective only for common concepts and are vulnerable to unstable image referencing and varying image conditions. Our formative study with designers underscores the need for primary-accent color compositions and context-dependent colors (e.g., 'clear' vs. 'polluted' sky) in design. In response, we introduce a generative approach for mining semantically resonant colors that leverages images generated by text-to-image models. Our insight is that contemporary text-to-image models can capture visual patterns from large-scale real-world data. The framework comprises three stages: concept instancing produces generative samples using diffusion models, text-guided image segmentation identifies concept-relevant regions within each image, and color association extracts primary colors accompanied by accent colors. Quantitative comparisons with expert designs validate our approach's effectiveness, and we demonstrate its applicability through cases in various design scenarios and a gallery.
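The three-stage pipeline outlined in the abstract can be illustrated with a minimal sketch built from off-the-shelf components. The sketch below assumes Stable Diffusion (via the diffusers library) for concept instancing, CLIPSeg (via transformers) for text-guided segmentation, and k-means clustering for primary/accent color extraction; these model choices, prompts, and thresholds are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the three-stage pipeline (illustrative only):
#   1. concept instancing with a text-to-image diffusion model,
#   2. text-guided segmentation of concept-relevant regions,
#   3. color association via clustering of the segmented pixels.
# Stable Diffusion, CLIPSeg, and all thresholds are assumptions for
# illustration, not the paper's actual implementation.
import numpy as np
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation
from sklearn.cluster import KMeans


def concept_instancing(concept, n_samples=4):
    """Stage 1: generate sample images of the concept with a diffusion model."""
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5"
    ).to("cuda")
    return pipe([f"a photo of {concept}"] * n_samples).images  # PIL images


def segment_concept(images, concept, threshold=0.4):
    """Stage 2: text-guided segmentation keeps concept-relevant pixels."""
    processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
    model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
    masks = []
    for img in images:
        inputs = processor(text=[concept], images=[img], return_tensors="pt")
        with torch.no_grad():
            heatmap = torch.sigmoid(model(**inputs).logits.squeeze())  # 352x352
        masks.append(heatmap.numpy() > threshold)
    return masks


def associate_colors(images, masks, n_colors=5):
    """Stage 3: cluster masked pixels; the largest cluster is taken as the
    primary color, the remaining clusters as accent colors."""
    pixels = []
    for img, mask in zip(images, masks):
        h, w = mask.shape
        arr = np.asarray(img.resize((w, h))) / 255.0  # align image with mask
        pixels.append(arr[mask])
    pixels = np.concatenate(pixels, axis=0)
    km = KMeans(n_clusters=n_colors, random_state=0).fit(pixels)
    order = np.argsort(-np.bincount(km.labels_))       # sort clusters by size
    palette = (km.cluster_centers_[order] * 255).astype(int)
    return palette[0], palette[1:]                     # primary, accents


images = concept_instancing("clear sky")
masks = segment_concept(images, "clear sky")
primary, accents = associate_colors(images, masks)
print("primary:", primary, "accents:", accents.tolist())
```

In practice, clustering in a perceptual color space (e.g., CIELAB) and aggregating palettes across many generated samples would be closer in spirit to the context-dependent, primary-accent palettes the abstract describes.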

Authors
Yihan Hou
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Xingchen Zeng
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Yusong Wang
CMA, Guangzhou, China
Manling YANG
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Xiaojiao Chen
Zhejiang University, Hangzhou, Zhejiang, China
Wei Zeng
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
DOI

10.1145/3706598.3713418

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713418

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Image and AI

Room: G303
7 presentations
2025-04-28 23:10:00 to 2025-04-29 00:40:00