The rapid integration of conversational AI systems into educational settings has intensified ethical concerns about academic integrity, fairness, and students' cognitive development. Institutional responses have largely centered on AI detection tools and restrictive policies, yet such approaches have proven unreliable and ethically contentious. This paper reframes AI misuse in education not primarily as a detection problem but as a measurement problem rooted in the loss of visibility into the learning process. When AI enters the assessment loop, educators often retain access to final outputs but lose insight into how those outputs were produced. Drawing on research in cognitive offloading, learning analytics, and multimodal timeline reconstruction, we propose the Learning Visibility Framework, grounded in three principles: clear specification and modeling of acceptable AI use, recognition of learning processes as assessable evidence alongside outcomes, and the establishment of transparent timelines of student activity. Rather than promoting surveillance, the framework emphasizes transparency and shared evidence as foundations for ethical AI integration in classroom settings. By shifting focus from adversarial detection toward process visibility, this work offers a principled pathway for aligning AI use with educational values while preserving trust and transparency between students and educators.