Data leakage affected 294 published papers across 17 scientific fields (Kapoor & Narayanan, 2023); a living survey has since grown that count to 648 papers across 30 fields. The dominant response has been documentation: checklists, linters, best-practice guides. Documentation reduces errors but does not close off structural failure modes. This paper proposes a structural remedy: a grammar that decomposes the supervised learning lifecycle into eight kernel primitives connected by a typed directed acyclic graph (DAG), with four hard constraints that reject the two most damaging leakage classes at call time. The grammar's core contribution is the terminal assess constraint: a runtime-enforced evaluate/assess boundary in which repeated test-set assessment is rejected by a guard on a nominally distinct Evidence type. A companion study across 2,047 experimental instances quantifies why this matters: selection leakage inflates performance by d_z = 0.93 and memorization leakage by d_z = 0.53–1.11. Two maintained implementations (Python, R) demonstrate the claims, and the appendix specification lets anyone build a conforming version.
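The terminal assess constraint described above can be sketched in a few lines. This is a hypothetical illustration only, not the paper's actual API: the names `Evidence`, `evaluate`, and `assess`, and the single-use guard flag, are assumptions about how a "guard on a nominally distinct Evidence type" might be realized in Python.

```python
class Evidence:
    """Nominally distinct wrapper marking held-out test data (assumed design)."""
    def __init__(self, X, y):
        self.X, self.y = X, y
        self._consumed = False  # guard state: one terminal assessment only

def evaluate(model, X_val, y_val):
    """Validation-side scoring: may be called repeatedly during selection."""
    return sum(model(x) == t for x, t in zip(X_val, y_val)) / len(y_val)

def assess(model, evidence: Evidence):
    """Terminal test-set assessment: the guard rejects any second call."""
    if evidence._consumed:
        raise RuntimeError("terminal assess constraint: test set already consumed")
    evidence._consumed = True
    return sum(model(x) == t for x, t in zip(evidence.X, evidence.y)) / len(evidence.y)
```

Under this sketch, `evaluate` supports iterative model selection on validation data, while a second `assess` call on the same `Evidence` instance fails at call time, which is the structural (rather than documentary) rejection of selection leakage that the abstract claims.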