LLMR: Real-time Prompting of Interactive Worlds using Large Language Models

Abstract

We present Large Language Model for Mixed Reality (LLMR), a framework for the real-time creation and modification of interactive Mixed Reality experiences using LLMs. LLMR leverages novel strategies to tackle difficult cases where ideal training data is scarce, or where the design goal requires the synthesis of internal dynamics, intuitive analysis, or advanced interactivity. Our framework relies on text interaction and the Unity game engine. By incorporating techniques for scene understanding, task planning, self-debugging, and memory management, LLMR outperforms the standard GPT-4 by 4x in average error rate. We demonstrate LLMR's cross-platform interoperability with several example worlds, and evaluate it on a variety of creation and modification tasks to show that it can produce and edit diverse objects, tools, and scenes. Finally, we conducted a usability study (N=11) with a diverse set of users that revealed participants had positive experiences with the system and would use it again.
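The abstract names four techniques (scene understanding, task planning, self-debugging, memory management) that cooperate in one generation loop. The sketch below illustrates how such an orchestration loop could be structured; it is a minimal, hypothetical Python rendition, not the authors' implementation. Every name in it (call_llm, Memory, build_scene_step, the prompts, the debug-round limit) is an assumption for illustration, and the actual LLMR framework targets Unity and C# rather than this simplified form.

```python
# Hypothetical sketch of an LLMR-style orchestration loop.
# All names and prompts below are illustrative assumptions.

from dataclasses import dataclass, field

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Stand-in for a real LLM API call (e.g., GPT-4); returns model text."""
    raise NotImplementedError("wire up an actual LLM provider here")

@dataclass
class Memory:
    """Rolling record of prior requests, standing in for memory management."""
    history: list = field(default_factory=list)

    def summary(self) -> str:
        # Keep only recent turns so the prompt stays within context limits.
        return "\n".join(self.history[-10:])

def build_scene_step(request: str, scene_json: str, memory: Memory,
                     max_debug_rounds: int = 3) -> str:
    # 1. Scene understanding: summarize the current scene graph for the model.
    scene_summary = call_llm("Summarize this scene for a code generator.",
                             scene_json)
    # 2. Task planning: decompose the user's request into ordered edit steps.
    plan = call_llm("Break this request into concrete Unity edit steps.",
                    f"Scene:\n{scene_summary}\nRequest: {request}")
    # 3. Code generation: produce Unity C# implementing the plan.
    code = call_llm("Write Unity C# implementing these steps.",
                    f"{memory.summary()}\n{plan}")
    # 4. Self-debugging: critique and repair the code for a bounded number
    #    of rounds before handing it back to the engine.
    for _ in range(max_debug_rounds):
        verdict = call_llm("Find compile/logic errors; reply OK if none.", code)
        if verdict.strip() == "OK":
            break
        code = call_llm("Fix the code given this critique.",
                        f"Critique:\n{verdict}\nCode:\n{code}")
    # Memory management: remember the turn for later modification requests.
    memory.history.append(request)
    return code
```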

Award
Honorable Mention
Authors
Fernanda De La Torre
MIT, Cambridge, Massachusetts, United States
Cathy Mengying Fang
MIT Media Lab, Cambridge, Massachusetts, United States
Han Huang
Rensselaer Polytechnic Institute, Troy, New York, United States
Andrzej Banburski-Fahey
Microsoft, Redmond, Washington, United States
Judith Amores
Microsoft, Cambridge, Massachusetts, United States
Jaron Lanier
Microsoft, Berkeley, California, United States
Paper URL

https://doi.org/10.1145/3613904.3642579

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Immersive Experiences: Creating and Communicating

Room 315
5 presentations
2024-05-14 20:00:00 – 21:20:00