While natural language is the de facto communication medium for LLM-based agents, it imposes a fundamental constraint: downsampling rich internal latent states into discrete tokens inherently limits the depth and nuance of the information that can be transmitted, hindering collaborative problem-solving. Inspired by human mind-reading, we propose Interlat (Inter-agent Latent Space Communication), a paradigm that transmits the last hidden states of an LLM directly, treating them as a representation of its mind (termed latent communication). A further compression stage condenses these latent messages through reasoning performed entirely in latent space. Experiments demonstrate that Interlat outperforms both fine-tuned chain-of-thought (CoT) prompting and single-agent baselines, promoting more exploratory behavior and enabling genuine utilization of latent information. The additional compression not only substantially accelerates inference but also maintains competitive performance through an efficient information-preserving mechanism. We position this work as a feasibility study of entirely latent-space inter-agent communication; our results highlight its potential and offer valuable insights for future research.
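The core idea can be illustrated with a minimal toy sketch: a sender agent's context is encoded into a continuous last hidden state, which the receiver consumes directly (via a learned projection) rather than first decoding it into discrete tokens. This is an illustrative assumption, not the paper's actual architecture; `toy_encoder`, `receiver_step`, `W_proj`, and the tiny hidden size are all hypothetical stand-ins for a real LLM forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 8  # toy hidden size (assumption; real LLMs use thousands of dims)

def toy_encoder(token_ids, W_embed, W_rnn):
    """Toy stand-in for an LLM forward pass: returns the last hidden state."""
    h = np.zeros(HIDDEN)
    for t in token_ids:
        h = np.tanh(W_embed[t] + W_rnn @ h)
    return h  # the sender's "mind": its final hidden state

# Sender agent encodes its context into a single continuous latent vector,
# skipping the lossy step of sampling discrete tokens from it.
W_embed = rng.normal(size=(100, HIDDEN))
W_rnn = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1
latent = toy_encoder([5, 17, 42], W_embed, W_rnn)

def receiver_step(latent_msg, own_hidden, W_proj):
    """Receiver injects the sender's latent like a virtual token embedding."""
    # project the sender's latent into the receiver's own hidden space
    return np.tanh(W_proj @ latent_msg + own_hidden)

W_proj = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1
fused = receiver_step(latent, np.zeros(HIDDEN), W_proj)
print(fused.shape)  # the receiver now conditions on the sender's full latent
```

In this sketch the channel carries the entire `HIDDEN`-dimensional vector, whereas token-based communication would collapse it to a handful of discrete symbols first; the further compression stage described above would correspond to condensing several such latents into fewer vectors before transmission.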