Large language models promise a broad range of capabilities, but when not given a specific objective, they default to generic outputs. We demonstrate that inferring the user's in-the-moment objective, then rapidly optimizing for that singular objective, enables LLMs to produce specialized tools, interfaces, and responses. Our work introduces just-in-time (JIT) objectives, which model a user's goals to specialize LLM systems on the fly. We contribute an architecture that automatically induces such objectives by passively observing user behavior, then steers downstream AI systems through generation and evaluation against each objective. Inducing a just-in-time objective (e.g., “Clarify the abstract’s research contribution”) enables the automatic generation of tools that, for example, critique a draft against relevant HCI methodologies, anticipate related researchers' reactions, or surface ambiguous terminology. In a series of experiments on participants' own tasks, JIT objectives enable LLM outputs that achieve 66–86% win rates over those of typical LLMs. In-person use sessions confirm that JIT objectives produce specialized tools that are unique to each participant and are rated as significantly higher in quality than a standard LLM chat tool.
ACM CHI Conference on Human Factors in Computing Systems