We frame embedding inversion as conditional masked diffusion, recovering all tokens in parallel through iterative denoising rather than sequential autoregressive generation. A masked diffusion language model is conditioned on the target embedding via adaptive layer normalization, requiring only 8 forward passes through a 78M-parameter model and no access to the target encoder. On 32-token sequences across three embedding models, the method achieves up to 81.3% token accuracy. Source code and a live demo are available at https://github.com/jina-ai/embedding-inversion-demo.
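The inversion loop described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the `denoiser` here is a hypothetical stand-in (random scores) for the 78M-parameter masked diffusion LM conditioned on the target embedding via adaptive layer normalization, and the linear unmasking schedule is an assumption for illustration. The key idea it shows is that all 32 positions start masked and are committed in parallel over 8 denoising steps, most-confident positions first.

```python
import numpy as np

MASK = -1       # sentinel for a still-masked position
SEQ_LEN = 32    # sequence length from the paper's setup
VOCAB = 1000    # toy vocabulary size (assumption)
STEPS = 8       # number of forward passes

rng = np.random.default_rng(0)

def denoiser(tokens, target_emb):
    """Hypothetical stand-in for the conditional masked diffusion LM.

    The real model is a transformer whose layer-norm scales/shifts are
    modulated by the target embedding (adaptive layer norm); here we just
    return random per-position logits to keep the sketch self-contained.
    """
    return rng.random((SEQ_LEN, VOCAB))

def invert(target_emb, steps=STEPS):
    tokens = np.full(SEQ_LEN, MASK)  # start fully masked
    for s in range(steps):
        logits = denoiser(tokens, target_emb)
        pred = logits.argmax(-1)               # best token per position
        conf = logits.max(-1)                  # its confidence
        conf[tokens != MASK] = -np.inf         # never overwrite committed slots
        # Linear schedule (assumption): after step s+1, a (s+1)/steps
        # fraction of positions is committed.
        target_filled = int(np.ceil(SEQ_LEN * (s + 1) / steps))
        n_unmask = target_filled - int((tokens != MASK).sum())
        for i in np.argsort(-conf)[:n_unmask]: # most confident masked slots
            tokens[i] = pred[i]
    return tokens

recovered = invert(target_emb=None)  # placeholder embedding for the sketch
```

After the final step every position holds a committed token, so the full 32-token sequence is recovered in exactly `STEPS` forward passes, regardless of sequence length.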