We present an approach for enhancing non-player characters (NPCs) in games by combining large language models (LLMs) with computer vision to provide contextual awareness of their surroundings. Conventional NPCs typically rely on pre-scripted dialogue and lack spatial understanding, which limits their responsiveness to player actions and reduces immersion. Our method addresses these limitations by capturing panoramic images of an NPC's environment and applying semantic segmentation to identify objects and their spatial positions. The extracted information is used to generate a structured JSON representation of the environment, combining object locations derived from segmentation with additional scene-graph data within the NPC's bounding sphere, encoded as directional vectors. This representation is provided as input to the LLM, enabling the NPC to incorporate spatial knowledge into player interactions. As a result, NPCs can dynamically reference nearby objects, landmarks, and environmental features, leading to more believable and engaging gameplay. We describe the technical implementation of the system and evaluate it in two stages: first, an expert interview was conducted to gather feedback and identify areas for improvement; after integrating these refinements, a user study was performed, showing that participants preferred the context-aware NPCs over a non-context-aware baseline and confirming the effectiveness of the proposed approach.
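The environment encoding described above can be illustrated with a minimal sketch. The schema below is hypothetical (the abstract does not specify field names): each segmented object inside the NPC's bounding sphere is stored with its label, its distance from the NPC, and a unit direction vector pointing from the NPC toward the object, then serialized to JSON for the LLM prompt.

```python
import json
import math

def build_environment_json(npc_pos, npc_radius, detections):
    """Encode detected objects within the NPC's bounding sphere as
    directional vectors relative to the NPC (illustrative schema only)."""
    objects = []
    for label, pos in detections:
        # Offset from the NPC to the object in world coordinates.
        dx = pos[0] - npc_pos[0]
        dy = pos[1] - npc_pos[1]
        dz = pos[2] - npc_pos[2]
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        # Keep only objects strictly inside the bounding sphere.
        if dist == 0 or dist > npc_radius:
            continue
        objects.append({
            "label": label,
            "distance": round(dist, 2),
            # Normalized direction vector from the NPC to the object.
            "direction": [round(dx / dist, 3),
                          round(dy / dist, 3),
                          round(dz / dist, 3)],
        })
    return json.dumps({"npc_position": npc_pos, "objects": objects}, indent=2)

# Example: a fountain 5 m east of the NPC is kept; a distant bench is dropped.
print(build_environment_json(
    npc_pos=[0.0, 0.0, 0.0],
    npc_radius=10.0,
    detections=[("fountain", [5.0, 0.0, 0.0]),
                ("bench", [30.0, 0.0, 0.0])],
))
```

In a full system the `detections` list would come from the semantic segmentation stage, and the resulting JSON string would be prepended to the NPC's dialogue prompt.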