Language Models (LMs) are increasingly used for a wide range of prediction tasks, but their training often neglects rare edge cases, reducing their reliability. Here, we define a stringent standard of trustworthiness: the task algorithm and its circuit implementation must be verified, edge cases accounted for, with no known failure modes. We show that a model can be trained to meet this standard if it is built using mathematically and logically specified frameworks. In this paper, we fully verify an autoregressive transformer model for n-digit integer addition. To demonstrate the reusability of verified modules, we insert the trained integer-addition model into a larger untrained model and train the combined model to perform both addition and subtraction. We find extensive reuse of the addition circuits in both tasks, which eases verification of the more complex subtraction model. We discuss how inserting verified task modules into LMs can leverage model reuse to improve the verifiability and trustworthiness of language models built from them. The reuse of verified circuits reduces the effort needed to verify more complex composite models, which we believe is a significant step toward the safety of language models.