We study the problem of learning Transformer-based sequence models with black-box access to their outputs. In this setting, a learner may adaptively query the model, viewed as an oracle, with any sequence of vectors and observe the corresponding real-valued output. We begin with the simplest case, a single-head softmax-attention regressor. We show that for a model with width $d$, there is an elementary algorithm that learns the parameters of single-head attention exactly with $O(d^2)$ queries. Further, we show that if there exists an algorithm for learning ReLU feedforward networks (FFNs), then the single-head algorithm can be easily adapted to learn one-layer Transformers with single-head attention. Next, motivated by the regime where the head dimension $r \ll d$, we provide a randomised algorithm that learns single-head attention-based models with $O(rd)$ queries via compressed sensing arguments. We also study robustness to noisy oracle access, proving that under mild norm and margin conditions, the parameters can be estimated to $\varepsilon$ accuracy with a polynomial number of queries even when outputs are only provided up to an additive tolerance. Finally, we show that multi-head attention parameters are not identifiable from value queries in general -- distinct parameterisations can induce the same input-output map. Hence, guarantees analogous to the single-head setting are impossible without additional structural assumptions.