FAIR: Framing AI’s Role in Programming Competitions — Understanding How LLMs Are Changing the Game in Competitive Programming

Abstract

This paper investigates how large language models (LLMs) are reshaping competitive programming. The field functions as an intellectual contest within computer science education and is marked by rapid iteration, real-time feedback, transparent solutions, and strict integrity norms. Prior work has evaluated LLM performance on contest problems, but little is known about how human stakeholders—contestants, problem setters, coaches, and platform stewards—are adapting their workflows and contest norms under LLM-induced shifts. At the same time, rising AI-assisted misuse and inconsistent governance expose urgent gaps in sustaining fairness and credibility. Drawing on 37 interviews spanning all four roles, a global survey of 207 contestants, and an API-based crawl of Codeforces contest logs (2022–2025) for quantitative analysis, we contribute: (i) an empirical account of evolving workflows, (ii) an analysis of contested fairness norms, and (iii) a chess-inspired governance approach with actionable measures—real-time LLM checks in online contests, peer co-monitoring and reporting, and cross-validation against offline performance—to curb LLM-assisted misuse while preserving fairness, transparency, and credibility.

Authors
Dongyijie Primo PAN
Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Lan LUO
Computational Media and Arts, Guangzhou, China
Ji Zhu
Communication University of China, Beijing, China
Zhiqi Gao
The Chinese University of Hong Kong, Shenzhen, Shenzhen, China
Xin Tong
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Pan Hui
The Hong Kong University of Science and Technology, Hong Kong, China

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Learning, Training, and Self-Development with AI

P1 - Room 125
7 presentations
2026-04-13, 20:15–21:45