ONYX: Assisting Users in Teaching Natural Language Interfaces Through Multi-Modal Interactive Task Learning

Abstract

Users are increasingly empowered to personalize natural language interfaces (NLIs) by teaching them how to handle new natural language (NL) inputs. However, our formative study found that when teaching new NL inputs, users require assistance in clarifying ambiguities that arise and want insight into which parts of the input the NLI understands. In this paper, we introduce ONYX, an intelligent agent that interactively learns new NL inputs by combining NL programming and programming-by-demonstration, also known as multi-modal interactive task learning. To address these challenges, ONYX suggests how it could handle new NL inputs based on previously learned concepts or user-defined procedures, and poses follow-up questions to clarify ambiguities in user demonstrations, using visual and textual aids to clarify the connections. Our evaluation shows that users provided with ONYX’s new features achieved significantly higher accuracy in teaching new NL inputs (median: 93.3%) than those without (median: 73.3%).

Authors
Marcel Ruoff
Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
Brad A. Myers
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Alexander Maedche
Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
Paper URL

https://doi.org/10.1145/3544548.3580964

Video

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Interactive Learning Support Systems

Hall G1
6 presentations
2023-04-26 23:30:00 – 2023-04-27 00:55:00