In this paper, we study how different Reddit communities discuss generative AI in high school education, focusing on learning, academic integrity, AI detection, and emotional framing. Using 3,789 posts from five education-related subreddits, we compare student, teacher, and mixed communities using a pipeline that combines keyword retrieval, human-validated relevance filtering, LLM-assisted annotation, and statistical tests of group differences. We find that stakeholder position strongly shapes discourse: teachers are more likely to articulate explicit pedagogical trade-offs, simultaneously framing AI as both beneficial and harmful for learning, whereas students more often discuss AI tactically in relation to accusations, grades, and enforcement. Across all groups, detector-related discourse is associated with significantly higher negative emotion, with larger effects for students and mixed communities than for teachers. These results suggest that AI detectors function not only as contested technical tools but also as governance mechanisms that impose asymmetric emotional burdens on those subject to institutional enforcement. Finally, we argue that detection-based enforcement should not serve as a primary academic-integrity strategy and that process-based assessment offers a fairer alternative for verifying authorship in AI-mediated classrooms.