While state-of-the-art language models (LMs) surpass the vast majority of humans in certain domains, their reasoning remains largely opaque, undermining trust in their output. Although autoregressive LMs can output explicit reasoning, their internal reasoning process stays hidden, which introduces risks such as deception and hallucination. In this work, we introduce the Prototype Transformer (ProtoT), an autoregressive LM architecture based on prototypes (parameter vectors), proposed as an alternative to standard self-attention-based transformers. ProtoT operates through two-way communication between the input sequence and the prototypes, and we show that this leads the prototypes to automatically capture nameable concepts (e.g., "woman") during training. The prototypes make it possible to interpret the model's reasoning and to make targeted edits to its behavior. Furthermore, by design, the prototypes create communication channels that aggregate contextual information at different time scales, which aids interpretability. In terms of computational scalability, ProtoT scales linearly with sequence length, versus the quadratic scaling of state-of-the-art self-attention transformers. Compared to baselines, ProtoT scales well with model and data size and performs well on text generation and downstream tasks (GLUE). ProtoT is robust to input perturbations on par with or better than some baselines, but differs from them by providing interpretable pathways that show how robustness and sensitivity arise. Approaching the performance of state-of-the-art architectures, ProtoT paves the way toward well-performing autoregressive LMs that are interpretable by design.
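To make the scaling claim concrete, the following is a minimal numpy sketch of the *kind* of two-way token–prototype communication the abstract describes, not the paper's actual implementation: prototypes first aggregate contextual information from the tokens (write phase), then each token reads back from the updated prototypes (read phase). With K prototypes fixed, the cost is O(T·K·d), i.e., linear in sequence length T, unlike the O(T²·d) of token-to-token self-attention. The function names and the residual update are illustrative assumptions; an autoregressive variant would additionally need causal masking in the write phase.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def proto_layer(tokens, prototypes):
    """One illustrative two-way communication step (hypothetical sketch).

    tokens:     (T, d) input sequence representations
    prototypes: (K, d) learned parameter vectors, K fixed and small
    returns:    (T, d) updated token representations
    """
    T, d = tokens.shape
    # Write phase: each prototype attends over all tokens and
    # aggregates contextual information -- cost O(T * K * d).
    w = softmax(prototypes @ tokens.T / np.sqrt(d), axis=-1)   # (K, T)
    proto_ctx = w @ tokens                                      # (K, d)
    # Read phase: each token attends over the K updated prototypes
    # and reads context back -- also O(T * K * d).
    r = softmax(tokens @ proto_ctx.T / np.sqrt(d), axis=-1)    # (T, K)
    return tokens + r @ proto_ctx                               # (T, d)

rng = np.random.default_rng(0)
out = proto_layer(rng.standard_normal((5, 8)), rng.standard_normal((3, 8)))
```

Because every token interacts with tokens only indirectly through the K prototype channels, doubling T doubles the cost instead of quadrupling it.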