Some of the tightest information-theoretic generalization bounds depend on the average information between the learned hypothesis and a \emph{single} training example. However, these sample-wise bounds were derived only for the \emph{expected} generalization gap. We show that even for the expected \emph{squared} generalization gap, no such sample-wise information-theoretic bounds exist. The same is true for PAC-Bayes and single-draw bounds. Remarkably, PAC-Bayes, single-draw, and expected squared generalization gap bounds that depend on information in \emph{pairs} of examples do exist.
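For concreteness, a representative sample-wise bound of this kind is the one of Bu et al. (2020). Under the assumption that the loss is $\sigma$-sub-Gaussian, and writing $W$ for the learned hypothesis, $S = (Z_1, \dots, Z_n)$ for the training set, $L(W)$ for the population risk, and $\hat{L}_S(W)$ for the empirical risk (notation chosen here for illustration), the bound reads
\[
\bigl|\mathbb{E}\bigl[L(W) - \hat{L}_S(W)\bigr]\bigr| \;\le\; \frac{1}{n}\sum_{i=1}^{n}\sqrt{2\sigma^{2}\, I(W; Z_i)},
\]
where $I(W; Z_i)$ is the mutual information between the hypothesis and the $i$-th training example alone, rather than the full sample $S$.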