PhotoScout: Synthesis-Powered Multi-Modal Image Search

Abstract

Due to the availability of increasingly large amounts of visual data, there is a growing need for tools that can help users find relevant images. While existing tools can perform image retrieval based on similarity or metadata, they fall short in scenarios that necessitate semantic reasoning about the content of the image. This paper explores a new multi-modal image search approach that allows users to conveniently specify and perform semantic image search tasks. With our tool, PhotoScout, the user interactively provides natural language descriptions, positive and negative examples, and object tags to specify their search tasks. Under the hood, PhotoScout is powered by a program synthesis engine that generates visual queries in a domain-specific language and executes the synthesized program to retrieve the desired images. In a study with 25 participants, we observed that PhotoScout allows users to perform image retrieval tasks more accurately and with less manual effort.

Authors
Celeste Barnaby
University of Texas at Austin, Austin, Texas, United States
Qiaochu Chen
University of Texas at Austin, Austin, Texas, United States
Chenglong Wang
Microsoft Research, Redmond, Washington, United States
Isil Dillig
University of Texas at Austin, Austin, Texas, United States
Paper URL

doi.org/10.1145/3613904.3642319

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Remote Presentations: Highlight on Creative HCI

Remote Sessions
11 presentations
2024-05-15 18:00:00 – 2024-05-16 02:20:00