Idealised as universal approximators, learners such as neural networks can be viewed as "variable functions" that may become one of a range of concrete functions after training. In the same way that equations constrain the possible values of variables in algebra, we may view objective functions as constraints on the behaviour of learners. We extract the equivalences that perfectly optimised objective functions impose, calling them "tasks". For these tasks, we develop a formal graphical language that allows us to: (1) separate the core tasks of a behaviour from its implementation details; (2) reason about and design behaviours model-agnostically; and (3) simply describe and unify approaches in machine learning across domains. As proof of concept, we design a novel task that converts classifiers into generative models we call "manipulators", which we implement by directly translating task specifications into code. The resulting models exhibit capabilities such as style transfer and interpretable latent-space editing, without the need for custom architectures, adversarial training, or random sampling. We formally relate the behaviour of manipulators to GANs, and empirically demonstrate performance competitive with VAEs. We report on experiments across vision and language domains that aim to characterise manipulators as approximate Bayesian inversions of discriminative classifiers.