Large Language Models (LLMs) have emerged as highly capable systems and are increasingly being integrated into a wide range of applications. However, the rapid pace of their deployment has outpaced a comprehensive understanding of their internal mechanisms, as well as a clear delineation of their capabilities and limitations. A desirable attribute of an intelligent system is the ability to recognize the scope of its own knowledge. To investigate whether LLMs embody this characteristic, we develop a benchmark that challenges these models to enumerate all the information they possess on specific topics. The benchmark evaluates whether a model recalls too much, too little, or precisely the right amount of information, thereby indicating its awareness of its own knowledge. Our findings reveal that all tested LLMs, given sufficient scale, demonstrate an understanding of how much they know about specific topics. While the rate at which this capability emerges varies across architectures, the results suggest that awareness of one's own knowledge may be a generalizable attribute of LLMs. Further research is needed to confirm this potential and to fully elucidate the underlying mechanisms.
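The scoring notion the abstract describes — classifying an enumeration as excessive, insufficient, or precise relative to a known ground-truth set — can be sketched as follows. This is a minimal illustration of the idea, not the paper's actual implementation; the function name and labels are assumptions for illustration.

```python
def classify_recall(recalled, ground_truth):
    """Label a model's enumeration as 'excessive', 'insufficient',
    or 'precise' relative to a known ground-truth set of items.

    A hypothetical scoring sketch: any hallucinated extras count as
    over-recall; otherwise any missing items count as under-recall.
    """
    recalled, ground_truth = set(recalled), set(ground_truth)
    extra = recalled - ground_truth    # items not in the ground truth
    missing = ground_truth - recalled  # known items the model omitted
    if extra:
        return "excessive"
    if missing:
        return "insufficient"
    return "precise"


truth = {"fact_a", "fact_b", "fact_c"}
print(classify_recall({"fact_a", "fact_b", "fact_c"}, truth))  # precise
print(classify_recall({"fact_a"}, truth))                      # insufficient
print(classify_recall(truth | {"fact_d"}, truth))              # excessive
```

A real benchmark would additionally need a reliable ground-truth inventory per topic and a matcher for paraphrased facts, but the over/under/exact trichotomy is the core signal.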