Language Models (LMs) are increasingly used for a wide range of prediction tasks, but their training often neglects rare edge cases, reducing their reliability. Here, we define a stringent standard of trustworthiness: both the task algorithm and its circuit implementation must be verified, accounting for edge cases, with no known failure modes. We show that a transformer model can be trained to meet this standard if it is built using mathematically and logically specified frameworks. In this paper, we fully verify a model for n-digit integer addition. To demonstrate the reusability of verified modules, we insert the trained integer-addition model into an untrained model and train the combined model to perform both addition and subtraction. We find extensive reuse of the addition circuits in both tasks, easing verification of the more complex subtractor model. We discuss how inserting verified task modules into LMs can leverage model reuse to improve the verifiability and trustworthiness of the language models built from them. The reuse of verified circuits reduces the effort needed to verify more complex composite models, which we believe is a significant step towards the safety of language models.