Agent-Supported Foresight for AI Systemic Risks: AI Agents for Breadth, Experts for Judgment

Abstract

AI impact assessments often stress near-term risks because human judgment degrades over longer horizons, exemplifying the Collingridge dilemma: foresight is most needed when knowledge is scarcest. To address long-term systemic risks, we introduce a scalable approach in which in-silico agents apply the Futures Wheel foresight method. We applied it to four AI uses spanning Technology Readiness Levels (TRLs): Chatbot Companion (TRL 9), AI Toy (TRL 7), Griefbot (TRL 5), and Death App (TRL 2). Across 30 agent runs per use, agents produced 86–110 consequences, condensed into 27–47 unique risks. To benchmark the agent outputs against human perspectives, we collected evaluations from 290 domain experts and 7 leaders, and conducted Futures Wheel sessions with 42 experts and 42 laypeople. Agents generated many systemic consequences. Compared with these outputs, experts identified fewer risks, typically less systemic but judged more likely, whereas laypeople surfaced more emotionally salient concerns that were generally less systemic. We propose a hybrid foresight workflow in which agents broaden systemic coverage and humans provide contextual grounding.

Authors
Leon Fröhling
Nokia Bell Labs, Cambridge, United Kingdom
Alessandro Giaconia
Nokia Bell Labs, Cambridge, United Kingdom
Edyta Paulina Bogucka
Nokia Bell Labs, Cambridge, United Kingdom
Daniele Quercia
Nokia Bell Labs, Cambridge, United Kingdom

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: AI Risks

P1 - Room 112
7 presentations
April 14, 2026, 18:00–19:30