This paper investigates how large language models (LLMs) are reshaping competitive programming. The field functions as an intellectual contest within computer science education and is marked by rapid iteration, real-time feedback, transparent solutions, and strict integrity norms. Prior work has evaluated LLM performance on contest problems, but little is known about how human stakeholders -- contestants, problem setters, coaches, and platform stewards -- are adapting their workflows and contest norms under LLM-induced shifts. At the same time, rising AI-assisted misuse and inconsistent governance expose urgent gaps in sustaining fairness and credibility. Drawing on 37 interviews spanning all four roles, a global survey of 207 contestants, and an API-based crawl of Codeforces contest logs (2022-2025) for quantitative analysis, we contribute: (i) an empirical account of evolving workflows, (ii) an analysis of contested fairness norms, and (iii) a chess-inspired governance approach with actionable measures -- real-time LLM checks in online contests, peer co-monitoring and reporting, and cross-validation against offline performance -- to curb LLM-assisted misuse while preserving fairness, transparency, and credibility.