Model stealing, where a learner tries to recover an unknown model via carefully chosen queries, is a critical problem in machine learning, as it threatens the security of proprietary models and the privacy of the data they are trained on. In recent years, there has been particular interest in stealing large language models (LLMs). In this paper, we aim to build a theoretical understanding of stealing language models by studying a simple and mathematically tractable setting. We study model stealing for Hidden Markov Models (HMMs), and more generally low-rank language models. We assume that the learner works in the conditional query model, introduced by Kakade, Krishnamurthy, Mahajan, and Zhang. Our main result is an efficient algorithm in the conditional query model for learning any low-rank distribution. In other words, our algorithm succeeds at stealing any language model whose output distribution is low-rank. This improves on the previous result of Kakade, Krishnamurthy, Mahajan, and Zhang, which additionally requires the unknown distribution to have high "fidelity," a property that holds only in restricted cases. There are two key insights behind our algorithm: First, we represent the conditional distributions at each timestep by constructing barycentric spanners among a collection of vectors of exponentially large dimension. Second, to sample from our representation, we iteratively solve a sequence of convex optimization problems that involve projection in relative entropy, which prevents errors from compounding over the length of the sequence. This is an interesting example where, at least theoretically, allowing a machine learning model to solve more complex problems at inference time can lead to drastic improvements in its performance.
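To make the first insight concrete, here is a minimal sketch of the standard swap procedure for constructing a C-approximate barycentric spanner, in the style of Awerbuch and Kleinberg, run on an explicit matrix of vectors. The function name, the small explicit dimension, and the determinant-based swap rule are illustrative assumptions: in the paper's setting the vectors are exponentially high-dimensional and reachable only through conditional queries, so this explicit version does not literally implement the paper's construction.

```python
import numpy as np

def barycentric_spanner(V, C=2.0, tol=1e-12):
    """Select row indices S of V (an n x d matrix whose rows span R^d)
    forming a C-approximate barycentric spanner: every row of V is a
    linear combination of the rows V[S] with coefficients in [-C, C].

    Classic swap procedure: maintain d candidate rows and swap in any
    row that grows |det| of the candidate matrix by a factor > C.
    """
    n, d = V.shape
    # Greedily pick an initial full-rank subset of rows.
    idx = []
    for i in range(n):
        if np.linalg.matrix_rank(V[idx + [i]]) == len(idx) + 1:
            idx.append(i)
        if len(idx) == d:
            break
    if len(idx) < d:
        raise ValueError("rows of V must span R^d")

    improved = True
    while improved:  # terminates: |det| grows geometrically, bounded above
        improved = False
        for j in range(d):
            base = abs(np.linalg.det(V[idx]))
            for i in range(n):
                cand = list(idx)
                cand[j] = i
                if abs(np.linalg.det(V[cand])) > C * base + tol:
                    idx, improved = cand, True
                    base = abs(np.linalg.det(V[idx]))
    return idx

# Demo: verify the spanner property via Cramer's rule.
rng = np.random.default_rng(0)
V = rng.standard_normal((60, 4))
S = barycentric_spanner(V)
coeffs = np.linalg.solve(V[S].T, V.T)        # column i: coefficients expressing row i of V
print(np.max(np.abs(coeffs)) <= 2.0 + 1e-6)  # True on termination
```

By Cramer's rule, once no swap can grow the determinant by more than a factor of C, every row of V is expressible over the selected rows with coefficients bounded by C in absolute value.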
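The second insight, projection in relative entropy, can likewise be illustrated with a small convex program. The sketch below minimizes KL(p || q) over the probability simplex subject to generic linear constraints A p = b; the helper name and the constraint set are placeholders, since the paper's actual projection at each timestep is onto a set determined by its barycentric-spanner representation, not a generic linear system.

```python
import numpy as np
from scipy.optimize import minimize

def kl_project(q, A, b):
    """Minimize KL(p || q) over the probability simplex subject to A p = b.

    The objective is convex in p, so for small instances a generic
    solver suffices; A p = b is a stand-in for the paper's constraints.
    """
    n, eps = len(q), 1e-12
    log_q = np.log(np.maximum(q, eps))

    def kl(p):
        p = np.maximum(p, eps)  # clamp to keep the log well-defined
        return float(np.sum(p * (np.log(p) - log_q)))

    cons = [
        {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
        {"type": "eq", "fun": lambda p: A @ p - b},
    ]
    res = minimize(kl, x0=np.full(n, 1.0 / n), bounds=[(0.0, 1.0)] * n,
                   constraints=cons, method="SLSQP")
    return res.x

# Demo: nudge a distribution to satisfy one linear moment constraint.
q = np.array([0.7, 0.1, 0.1, 0.1])
A = np.array([[0.0, 1.0, 1.0, 1.0]])  # total mass on symbols 1..3
b = np.array([0.5])
p = kl_project(q, A, b)
print(p.round(3), p[1:].sum().round(3))
```

The design rationale matches the abstract's point about inference-time computation: rather than sampling directly from a possibly error-laden representation, the sampler pays for a convex optimization at each timestep, and the relative-entropy projection keeps the per-step distributions consistent so errors do not compound over the length of the sequence.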