If $A$ and $B$ are sets such that $A \subset B$, generalisation may be understood as the inference from $A$ of a hypothesis sufficient to construct $B$. One might infer any number of hypotheses from $A$, yet only some of those may generalise to $B$. How can one know which are likely to generalise? One strategy is to choose the shortest, equating the ability to compress information with the ability to generalise (a proxy for intelligence). We examine this in the context of a mathematical formalism of enactive cognition. We show that compression is neither necessary nor sufficient to maximise performance (measured in terms of the probability of a hypothesis generalising). We formulate a proxy unrelated to length or simplicity, called weakness. We show that if tasks are uniformly distributed, then there is no choice of proxy that performs at least as well as weakness maximisation in all tasks while performing strictly better in at least one. In experiments comparing maximum weakness and minimum description length in the context of binary arithmetic, the former generalised at between $1.1$ and $5$ times the rate of the latter. We argue this demonstrates that weakness is a far better proxy, and explains why DeepMind's Apperception Engine is able to generalise effectively.
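To make the comparison concrete, the following is a minimal Python sketch, not the paper's formalism or the Apperception Engine's search. It assumes a simplified operationalisation: statements are triples $(x, y, z)$, the task $B$ is addition mod $4$, the weakness of a hypothesis is taken to be the size of its extension over the finite universe of statements, and description length is approximated by the length of a rule's source string. The candidate pool and the sample $A$ are hypothetical illustrations.

```python
from itertools import product

N = 4  # inputs and outputs are 2-bit numbers, so statements range over 0..3
U = [(x, y, z) for x, y, z in product(range(N), repeat=3)]  # universe of statements

# Hidden target: addition mod 4. B is the set of all true statements.
B = {(x, y, z) for (x, y, z) in U if z == (x + y) % N}

# A hand-picked pool of candidate hypotheses (purely illustrative). Each rule
# is a predicate over a statement (x, y, z); the length of its source string
# stands in for description length.
CANDIDATES = [
    "z==x+y",               # addition without wrap-around (shorter, narrower)
    "z==(x+y)%4",           # addition mod 4 (longer, but larger extension)
    "z==x",
    "z==y",
    "z==(x*y)%4",
    "z==(x+y)%4 and x<2",   # the right rule, but only on half the domain
]

def extension(src):
    """All statements in U that satisfy the rule `src`."""
    return {(x, y, z) for (x, y, z) in U if eval(src, {"x": x, "y": y, "z": z})}

def licensed(ext, x, y):
    """Outputs the hypothesis permits for the input (x, y)."""
    return {z for (a, b, z) in ext if (a, b) == (x, y)}

def generalisation(ext):
    """Fraction of inputs on which the hypothesis entails exactly the correct output."""
    hits = sum(licensed(ext, x, y) == {(x + y) % N}
               for x, y in product(range(N), repeat=2))
    return hits / (N * N)

A = {(0, 0, 0), (1, 1, 2), (0, 2, 2)}  # observed statements, A is a subset of B

pool = [(src, extension(src)) for src in CANDIDATES]
pool = [(src, ext) for src, ext in pool if A <= ext]  # keep hypotheses consistent with A

mdl = min(pool, key=lambda p: len(p[0]))   # minimum description length
weak = max(pool, key=lambda p: len(p[1]))  # maximum weakness (largest extension)

for label, (src, ext) in (("MDL", mdl), ("weakness", weak)):
    print(f"{label:>8}: {src!r:24} weakness={len(ext):2d} "
          f"generalisation={generalisation(ext):.2f}")
```

On this sample, minimum description length selects the shorter rule `z==x+y`, which is consistent with $A$ but fails whenever the sum wraps past $3$, while weakness maximisation selects `z==(x+y)%4`, whose larger extension covers the whole domain. This mirrors the direction of the reported effect, not its magnitude.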