Standard attention stores keys/values losslessly but reads them through a per-head convex average, which blocks channel-wise selection. We propose the Free Energy Mixer (FEM): a free-energy (log-sum-exp) read that applies a value-driven, per-channel log-linear tilt to a fast prior over indices (e.g., the query/key scores of standard attention). Unlike methods that try to improve or enrich the $(q,k)$ scoring distribution, FEM treats it as a prior and produces a value-aware posterior read at unchanged complexity, moving smoothly from averaging to per-channel selection as the learnable inverse temperature grows, while preserving parallelism and the original asymptotic cost ($O(T^2)$ for softmax; $O(T)$ for linearizable variants). We instantiate a two-level gated FEM that is plug-and-play with standard and linear attention, linear RNNs, and SSMs; at matched parameter budgets it consistently outperforms strong baselines on NLP, vision, and time-series tasks.
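The free-energy read can be sketched concretely. The following is a minimal illustration, not the paper's actual two-level gated instantiation: it assumes a softmax prior over indices built from query/key scores and a hypothetical per-channel inverse-temperature vector `beta`; all names here are illustrative.

```python
import numpy as np

def fem_read(scores, values, beta):
    """Free-energy (log-sum-exp) read for a single query.

    scores: (T,)   prior logits, e.g. q.k / sqrt(d) from standard attention
    values: (T, d) stored values
    beta:   (d,)   per-channel inverse temperature (> 0), illustrative stand-in
                   for the paper's learnable parameter

    Computes, per channel c:
        y_c = (1 / beta_c) * log sum_i p_i * exp(beta_c * v_{i,c})
    where p = softmax(scores) is the prior. As beta_c -> 0 this recovers the
    usual convex average sum_i p_i v_{i,c}; for large beta_c it approaches a
    per-channel max over the prior's support (per-channel selection).
    """
    log_p = scores - np.logaddexp.reduce(scores)          # log softmax prior
    tilted = log_p[:, None] + beta[None, :] * values      # (T, d) tilted logits
    return np.logaddexp.reduce(tilted, axis=0) / beta     # stable log-sum-exp

# tiny check: the beta -> 0 limit matches the plain attention average
T, d = 5, 4
rng = np.random.default_rng(0)
s, V = rng.normal(size=T), rng.normal(size=(T, d))
p = np.exp(s - np.logaddexp.reduce(s))
avg = p @ V                                    # standard convex-average read
near_avg = fem_read(s, V, np.full(d, 1e-6))
assert np.allclose(near_avg, avg, atol=1e-4)
```

Because the tilt lives inside a log-sum-exp, the read stays a smooth, parallelizable reduction over indices, which is why the asymptotic cost of the underlying attention mechanism is unchanged.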