GANs promise indistinguishability; logic explains it. We put the two on a budget: a discriminator that can only ``see'' up to a logical depth $k$, and a generator that must look correct to that bounded observer. \textbf{LOGAN} (LOGical GANs) casts the discriminator as a depth-$k$ Ehrenfeucht--Fra\"iss\'e (EF) \emph{Opponent} that searches for small, legible faults (odd cycles, nonplanar crossings, directed bridges), while the generator plays \emph{Builder}, producing samples that admit a $k$-round matching to a target theory $T$. We ship a minimal toolkit -- an EF-probe simulator and MSO-style graph checkers -- and four experiments, including real neural GAN training with PyTorch. Beyond verification, we score samples with a \emph{logical loss} that mixes budgeted EF round-resilience with cheap certificate terms, enabling a practical curriculum on depth. Framework validation demonstrates $92\%$--$98\%$ property satisfaction via simulation (Exp.~3), while real neural GAN training achieves $5\%$--$14\%$ improvements on challenging properties and $98\%$ satisfaction on connectivity (matching simulation) through adversarial learning (Exp.~4). LOGAN is a compact, reproducible path toward logic-bounded generation with interpretable failures, demonstrated effectiveness in both simulated and real training, and dials for control.
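To make the ``cheap certificate terms'' concrete, here is a minimal sketch (our own illustration, not the LOGAN toolkit itself): one legible-fault checker -- an odd-cycle detector via BFS 2-coloring -- combined with a hypothetical EF round-resilience score into a toy logical loss. The function names and the weighting scheme are assumptions for exposition.

```python
# Sketch of a certificate term for LOGAN-style logical loss.
# has_odd_cycle and logical_loss are illustrative names, not the paper's API.
from collections import deque

def has_odd_cycle(n, edges):
    """Return True iff the undirected graph on n vertices contains an
    odd cycle (i.e. it is not bipartite) -- one of the small, legible
    faults a depth-k Opponent can exhibit."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [-1] * n  # -1 = uncolored; otherwise 0 or 1
    for s in range(n):
        if color[s] != -1:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if color[v] == -1:
                    color[v] = color[u] ^ 1
                    q.append(v)
                elif color[v] == color[u]:
                    return True  # same color across an edge: odd cycle
    return False

def logical_loss(n, edges, ef_resilience, w_cert=1.0):
    """Toy logical loss: EF-round shortfall plus a certificate penalty.
    ef_resilience in [0, 1] is the fraction of the k EF rounds the
    sample survives against the Opponent (assumed given here)."""
    cert_penalty = 1.0 if has_odd_cycle(n, edges) else 0.0
    return (1.0 - ef_resilience) + w_cert * cert_penalty
```

A 4-cycle that survives all $k$ rounds incurs zero loss, while a triangle is penalized by the certificate term regardless of its EF score; the depth budget $k$ would then be raised over a curriculum as in the paper.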