Starting from the hypothesis that knowledge in semantic space is organized along structured manifolds, we argue that this geometric structure renders the space explorable. By traversing it and using the resulting continuous representations to condition an LLM's generation distribution, we can systematically expand the model's reachable semantic range. We introduce a framework that requires no modification of LLM parameters and operationalizes this idea by constructing a conditioning distribution from a small set of diverse anchor generations. This distribution conditions the LLM's generation via an xRAG-style projector. Our experiments demonstrate that this manifold-based conditioning substantially increases generative diversity, with direct benefits for enhancing divergent thinking, a core facet of creativity, in language models.
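The pipeline described above can be sketched in a minimal, purely illustrative form: fit a Gaussian to embeddings of a few diverse anchor generations (the conditioning distribution), sample continuous representations from it (manifold traversal), and map each sample through a linear projector standing in for the xRAG-style projector. All names, dimensions, and the projector itself are hypothetical toy stand-ins, not the paper's actual implementation.

```python
# Hedged sketch of manifold-based conditioning. Hypothetical throughout:
# anchor embeddings, the Gaussian conditioning distribution, and the
# linear projector are toy stand-ins for the framework's components.
import numpy as np

rng = np.random.default_rng(0)

def fit_conditioning_distribution(anchor_embs: np.ndarray):
    """Mean/covariance of anchor embeddings define the conditioning distribution."""
    mu = anchor_embs.mean(axis=0)
    # Small diagonal jitter keeps the covariance positive definite.
    cov = np.cov(anchor_embs, rowvar=False) + 1e-4 * np.eye(anchor_embs.shape[1])
    return mu, cov

def traverse(mu: np.ndarray, cov: np.ndarray, n_samples: int) -> np.ndarray:
    """Explore the manifold by sampling continuous representations."""
    return rng.multivariate_normal(mu, cov, size=n_samples)

def project_to_soft_prompt(z: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Stand-in for an xRAG-style projector: linear map into the LLM's embedding space."""
    return z @ W

# Toy setup: 5 anchor generations embedded in a 16-d semantic space,
# projected to soft-prompt vectors in a 32-d LLM embedding space.
anchors = rng.normal(size=(5, 16))
mu, cov = fit_conditioning_distribution(anchors)
samples = traverse(mu, cov, n_samples=8)
W = rng.normal(size=(16, 32)) / np.sqrt(16)  # hypothetical projector weights
soft_prompts = project_to_soft_prompt(samples, W)
print(soft_prompts.shape)  # (8, 32): one conditioning vector per traversal sample
```

In a real system, each projected vector would be prepended to the LLM's input embeddings to steer generation; here the sketch only shows the sampling-and-projection geometry.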