Automatic Annotation Synchronizing with Textual Description for Visualization

Abstract

In this paper, we propose a technique for automatically annotating a visualization according to its textual description. In our approach, the visual elements of the target visualization, along with their visual properties, are identified and extracted with a Mask R-CNN model. Meanwhile, the description is parsed into visual search requests. Based on the identification results and the search requests, each descriptive sentence is displayed as an annotation beside the focal area it describes. Successive sentences are presented in separate scenes of a generated animation, yielding a vivid step-by-step presentation. With a user-customized style, the animation guides the audience's attention through appropriate highlighting, such as emphasizing specific features or isolating part of the data. We demonstrate the utility and usability of our method through use cases and a user study.
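The pipeline above pairs a chart-element detector with a matcher that links parsed sentences to detected elements. Below is a minimal Python sketch of those two steps, not the authors' implementation: it assumes a Mask R-CNN fine-tuned on hypothetical chart-element classes (CHART_CLASSES is our own invention), and the search-request structure ({"mark": ..., "region": ...}) is purely illustrative; the sentence-parsing step itself is not shown.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Hypothetical label set for a chart-element detector; the paper's
# actual classes are not listed here.
CHART_CLASSES = ["background", "bar", "line", "point", "axis", "legend", "title"]

def detect_elements(image: torch.Tensor, score_threshold: float = 0.7):
    """Run Mask R-CNN on a chart image and keep confident detections.

    `image` is a float tensor of shape (3, H, W) with values in [0, 1].
    """
    model = maskrcnn_resnet50_fpn(num_classes=len(CHART_CLASSES))
    # model.load_state_dict(torch.load("chart_maskrcnn.pt"))  # fine-tuned weights (assumed)
    model.eval()
    with torch.no_grad():
        output = model([image])[0]  # dict with boxes, labels, scores, masks
    elements = []
    for box, label, score, mask in zip(
        output["boxes"], output["labels"], output["scores"], output["masks"]
    ):
        if score >= score_threshold:
            elements.append({
                "type": CHART_CLASSES[int(label)],
                "box": box.tolist(),       # [x1, y1, x2, y2] in pixels
                "mask": mask[0] > 0.5,     # binary pixel mask of the element
            })
    return elements

def match_request(elements, request):
    """Return the detected elements that satisfy a parsed search request.

    `request` is an illustrative structure such as
    {"mark": "bar", "region": [x1, y1, x2, y2]}, produced by parsing a
    descriptive sentence; the parsing step itself is not shown here.
    """
    hits = [e for e in elements if e["type"] == request["mark"]]
    if "region" in request:
        rx1, ry1, rx2, ry2 = request["region"]
        hits = [
            e for e in hits
            if e["box"][0] >= rx1 and e["box"][1] >= ry1
            and e["box"][2] <= rx2 and e["box"][3] <= ry2
        ]
    return hits
```

The annotation placement and animation steps would then consume this output, anchoring each sentence beside the bounding boxes of its matched elements; for instance, match_request(detect_elements(img), {"mark": "bar"}) yields the candidate bars that a sentence about "the tallest bar" could refer to.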

Keywords
Visualization
Annotation
Natural Language Interface
Machine Learning
Authors
Chufan Lai
Peking University, Beijing, China
Zhixian Lin
Peking University, Beijing, China
Ruike Jiang
Peking University, Beijing, China
Yun Han
Peking University, Beijing, China
Can Liu
Peking University, Beijing, China
Xiaoru Yuan
Peking University, Beijing, China
DOI

10.1145/3313831.3376443

Paper URL

https://doi.org/10.1145/3313831.3376443


Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Talk visually to me

Paper session
Room: 316A MAUI
5 presentations
2020-04-28, 20:00–21:15