Scene-Aware Behavior Synthesis for Virtual Pets in Mixed Reality

Abstract

Virtual pets offer an alternative to real pets, serving as a substitute for people with allergies or as preparation for adopting a real pet. Recent advancements in mixed reality pave the way for virtual pets to provide a more natural and seamless experience for users. However, one key challenge is embedding environmental awareness into the virtual pet (e.g., identifying the food bowl's location) so that it can behave naturally in the real world. We propose a novel approach that synthesizes virtual pet behaviors by considering scene semantics, enabling a virtual pet to behave naturally in mixed reality. Given a scene captured from the real world, our approach synthesizes a sequence of pet behaviors (e.g., resting after eating) and then assigns each behavior in the sequence to a location in the real scene. User studies evaluating our approach showed its efficacy in synthesizing natural virtual pet behaviors.
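
The paper's method itself is not reproduced on this page. Purely as a loose illustration of the two-stage idea described in the abstract (synthesize a behavior sequence, then assign each behavior to a semantically compatible location in the captured scene), the following is a minimal Python sketch; every behavior name, transition probability, object label, and function in it is a hypothetical assumption, not the authors' model or code.

import random

# Hypothetical transition probabilities between pet behaviors (toy values).
TRANSITIONS = {
    "eat":  {"rest": 0.6, "play": 0.3, "eat": 0.1},
    "rest": {"play": 0.5, "eat": 0.3, "rest": 0.2},
    "play": {"rest": 0.5, "eat": 0.4, "play": 0.1},
}

# Hypothetical mapping from each behavior to scene-object labels it requires.
COMPATIBLE_OBJECTS = {
    "eat":  ["food_bowl"],
    "rest": ["sofa", "rug", "bed"],
    "play": ["floor", "rug"],
}

def synthesize_sequence(start="eat", length=5):
    """Sample a behavior sequence from the toy transition table."""
    sequence = [start]
    for _ in range(length - 1):
        probs = TRANSITIONS[sequence[-1]]
        sequence.append(random.choices(list(probs), weights=list(probs.values()))[0])
    return sequence

def assign_locations(sequence, scene_objects):
    """Assign each behavior to the position of a compatible detected object.

    scene_objects: list of (label, position) pairs, e.g. produced by some
    scene-understanding step that locates the food bowl, sofa, and so on.
    """
    assignments = []
    for behavior in sequence:
        candidates = [pos for label, pos in scene_objects
                      if label in COMPATIBLE_OBJECTS[behavior]]
        assignments.append((behavior, random.choice(candidates) if candidates else None))
    return assignments

if __name__ == "__main__":
    # A made-up captured scene: semantic label plus a 3D position in meters.
    scene = [("food_bowl", (1.2, 0.0, 3.4)),
             ("sofa", (0.5, 0.0, 1.1)),
             ("rug", (2.0, 0.0, 2.0))]
    for behavior, position in assign_locations(synthesize_sequence(), scene):
        print(behavior, "->", position)

In the actual paper, the behavior sequence and its placement in the scene are driven by scene semantics rather than hand-written tables like those above; the sketch only mirrors the structure of the two-stage pipeline described in the abstract.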

Authors
Wei Liang
Beijing Institute of Technology, Beijing, China
Xinzhe Yu
Beijing Institute of Technology, Beijing, China
Rawan Alghofaili
George Mason University, Fairfax, Virginia, United States
Yining Lang
Alibaba Group, Beijing, China
Lap-Fai Yu
George Mason University, Fairfax, Virginia, United States
DOI

10.1145/3411764.3445532

Paper URL

https://doi.org/10.1145/3411764.3445532

Video

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Novel Visualization Techniques

[A] Paper Room 09, 2021-05-11 17:00:00~2021-05-11 19:00:00 / [B] Paper Room 09, 2021-05-12 01:00:00~2021-05-12 03:00:00 / [C] Paper Room 09, 2021-05-12 09:00:00~2021-05-12 11:00:00